AFS Server

Getting started

  • Install Ubuntu 12.04 LTS
  • Install the Kerberos KDC (directions at Kerberos)
  • Set up DNS autodiscovery (a quick verification sketch follows this list)
    rnd.ru.is.        IN    AFSDB    1 afsdb1.rnd.ru.is.
    
  • Ubuntu
    • client configuration is in /etc/openafs
    • server configuration is in /etc/openafs/server
  • CentOS
    • client configuration is in /usr/vice/etc (the traditional location)
    • server configuration is in /usr/afs/etc
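
To check that the AFSDB record resolves, you can query it directly; a minimal sketch, assuming dig (from dnsutils) is installed:

    dig +short AFSDB rnd.ru.is
    # 1 afsdb1.rnd.ru.is.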

Client

Important! Do this first!

see AFS_Client_Installation

Server

  1. Packages
    sudo apt-get install openafs-krb5 openafs-{fileserver,dbserver}

Keys and accounts

  1. Add entries to /etc/openafs/CellServDB and /etc/openafs/server/CellServDB
    1. Note that the AFS cell MUST be lowercase
      >rnd.ru.is            # Reykjavik University
      130.208.209.37            #afsdb1.rnd.objid.net 
      130.208.209.39            #afsdb2.rnd.objid.net 
      130.208.209.40            #afsdb3.rnd.objid.net 
      
  2. Edit your /etc/krb5.conf
           
    [libdefaults]
    dns_lookup_kdc = true
    dns_lookup_realm = true
    
    [realms]
    RND.RU.IS = {
                    kdc = kerberos.rnd.ru.is
                    kdc = kerberos-1.rnd.ru.is
                    kdc = kerberos-2.rnd.ru.is
                    admin_server = kerberos.rnd.ru.is
    }
    
  3. Add the principal "afs/rnd.ru.is" and import the key to /etc/openafs/afs.keytab. This is also a good time to set up the normal host keytab. Replace HOSTNAME with the reverse-resolvable hostname. If this is on greenqloud, you will need to make both the external and the internal names and add them to the keytab.
    1. WARNING: If you run kadmin's ktadd again, it will increment the key version number (KVNO), which will prevent the new servers from talking to the old ones. Instead, copy the afs.keytab to all of the machines using scp!
      kadmin.local
      kadmin: addprinc -policy service -randkey -e des-cbc-crc:normal afs/rnd.ru.is
      kadmin: ktadd -k /etc/openafs/afs.keytab -e des-cbc-crc:normal afs/rnd.ru.is
      kadmin: ank -policy host -randkey host/HOSTNAME
      kadmin: ktadd host/HOSTNAME
      kadmin: ank -policy host -randkey host/INTERNALHOSTNAME
      kadmin: ktadd host/INTERNALHOSTNAME
      
  4. Remember the KVNO (key version number)
    klist -ke /etc/openafs/afs.keytab
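    # Sample output (illustrative; your KVNO and checksum will differ):
    # Keytab name: FILE:/etc/openafs/afs.keytab
    # KVNO Principal
    #    3 afs/rnd.ru.is@RND.RU.IS (DES cbc mode with CRC-32)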
  5. Import the secret key into the AFS system. Replace KVNO with the version number
    asetkey add KVNO /etc/openafs/afs.keytab afs/rnd.ru.is
    
  6. Copy the keytab to use the improved security
    cp /etc/openafs/afs.keytab /etc/openafs/server/rxkad.keytab

    #Now test with bos (afs-fileserver must be running, possibly restarted)
    sudo service openafs-fileserver restart
    sudo bos listkeys afsdb1 -localauth
    #key 3 has cksum 2586520638
    #Keys last changed on Fri Mar 30 02:10:25 2012.
    #All done.
    
  7. Set up kerberized root shell access (ksu reads /root/.k5login). Replace my principal with yours or add it to the list.
    sudo vi /root/.k5login
    foley@RND.RU.IS
    
  8. Test
    ksu
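    # With a valid ticket and your principal in /root/.k5login, ksu should
    # grant a root shell without the root password (illustrative output):
    # Authenticated foley@RND.RU.IS
    # Account root: authorization for foley@RND.RU.IS successful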

Partitions for vice (AFS cell)

  1. Make the partitions in your filesystem (as an image)
    cd /home
    sudo dd if=/dev/zero of=vicepa.img bs=100M count=80   # (8 GB partition)
    sudo mkfs.ext4 vicepa.img
    sudo sh -c "echo '/home/vicepa.img /vicepa ext4 defaults,loop 0 2' >> /etc/fstab" 
    sudo tune2fs -c 0 -i 0 -m 0 vicepa.img
    
  2. Now we mount it
    sudo mkdir -p /vicepa
    sudo mount /vicepa
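    # Sanity check (illustrative): the image should now be mounted
    df -h /vicepa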
    

    To add more disks, see Adding More disks

Firewall settings

  1. Need to poke holes in the firewall also (http://security.fnal.gov/cookbook/KerberosPorts.html)
    1. Log in to the firewall, bridge.objid.net
    2. Open these ports in /etc/shorewall/rules
      ## AFS and kerberos 
      ## From http://security.fnal.gov/cookbook/KerberosPorts.html
      ACCEPT all    net:130.208.209.37-130.208.209.40 tcp,udp 88 #krb
      ACCEPT all    net:130.208.209.37-130.208.209.40 tcp 749
      ACCEPT all    net:130.208.209.37-130.208.209.40 tcp,udp 464
      ACCEPT all    net:130.208.209.37-130.208.209.40 udp 749,4444
      ACCEPT all    net:130.208.209.37-130.208.209.40 udp 9878
      ACCEPT all    net:130.208.209.37-130.208.209.40 udp 7000:7007
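    3. Reload Shorewall and verify. A minimal check, assuming a Kerberos client is configured outside the firewall:
      sudo shorewall restart   # on the firewall, to load the new rules
      kinit foley@RND.RU.IS    # from an external client; exercises port 88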
      
      

Make the new cell

  1. If you get issues with the case on the afs cell (all UPPERCASE is bad)
    1. sudo dpkg-reconfigure openafs-client
    2. sudo dpkg-reconfigure openafs-fileserver
  2. Make the Cell!
    sudo afs-newcell
    1. Yes, we meet the requirements
      ##Principal: root/admin
    2. If you have problems with the ip address in the CellServDB, make sure it matches in /etc/hosts!
    3. If you see issues about network connections, you probably have an orphan process or something running on that port.
    4. If you have to clear out the admin user list, it is in /etc/openafs/server/UserList

Testing out the cell and making the root volume

  1. Test out kinit and tokens
    sudo su # (We want to switch to the root user)
    
    kinit root/admin
    
    Password for root/admin@RND.RU.IS: PASSWORD
    
    aklog
    
    1. If you get errors, it probably means that weak crypto is not enabled; these keys are single-DES, so you may need allow_weak_crypto = true in the [libdefaults] section of /etc/krb5.conf
  2. Check out your tickets and tokens with klist -5f and tokens
  3. Now create the root volume
    afs-rootvol
    ...
    4) The AFS client must be running pointed at the new cell.
    Do you meet these conditions? (y/n) y
    
    You will need to select a server (hostname) and AFS partition on which to
    create the root volumes.
    
    What AFS Server should volumes be placed on? boron.rnd.ru.is
    What partition? [a] a
    
  4. Everything should now be happy!
  5. Add the update server to distribute config files over encrypted channel
    sudo bos create localhost upserveretc simple  "/usr/lib/openafs/upserver  -crypt /etc/openafs" -localauth 
  6. Add the backup server (database servers need this process)
    sudo bos create localhost buserver simple "/usr/lib/openafs/buserver" -localauth
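  7. Confirm that the server instances are running (a quick check; the output below is illustrative)
    sudo bos status localhost -localauth
    # Instance ptserver, currently running normally.
    # Instance vlserver, currently running normally.
    # Instance fs, currently running normally.
    # Instance upserveretc, currently running normally.
    # Instance buserver, currently running normally.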

Standard volumes

  1. Now we create some of the "standard" volumes. This is based on the MIT configuration. Note that these all start with a very small quota and are their own "class"; you will need to schedule backup volumes for them individually. Note that you will need to work in the read-write path of the root, which is /afs/.rnd.ru.is
    cd /afs/.rnd.ru.is
    vos create afsdb1 a activity
    vos create afsdb1 a course
    vos create afsdb1 a project
    vos create afsdb1 a software
    vos create afsdb1 a system
    vos create afsdb1 a dept
    vos create afsdb1 a org
    vos create afsdb1 a reference
    
  2. Now we mount them in the root area
    fs mkmount activity activity
    fs mkmount course course
    fs mkmount project project
    fs mkmount software software
    fs mkmount system system
    fs mkmount dept dept
    fs mkmount org org
    fs mkmount reference reference
    
  3. Finally, since this is a read-only volume, we have to "release" it
    vos release root.cell
  4. And check to make sure the new directories show up in /afs/rnd.ru.is
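    # After the release, the mount points should be visible (illustrative):
    ls /afs/rnd.ru.is
    # activity  course  dept  org  project  reference  software  system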

Administrative users

  • Log in to the Kerberos server as root and create an afsadm principal
    kadmin.local
    addprinc -policy user <user>/afsadm
    quit
  • If you need super-super user capability, you will need to run this command with root/admin
    kinit root/admin
    aklog
    bos adduser afsdb1 <user>/afsadm
    
  • Now you are a super-super user and can make the AFS server dance. Sometimes it takes a few minutes for the protection database to update, so you might have to wait. Unfortunately, you are not done: you still need to add the user to the system:administrators group so they can do other useful operations.
  • Create an equivalent user to the Kerberos user
    pts createuser <user>.afsadm
  • Add them to the group system:administrators. You may need to wait for the ptserver to sync up
    pts adduser <user>.afsadm system:administrators
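  • To confirm the membership took effect, check it (a quick check; output is illustrative)
    pts membership system:administrators
    # Members of system:administrators (id: -204) are:
    #   <user>.afsadm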

Backup volumes

  • Create backup volumes for everything!
     vos backupsys 
  • Automate backup of the user volumes at 1:00 and the temp volumes at 0:01
    bos create afsdb1 backupusers cron -cmd "/usr/bin/vos backupsys -prefix user -localauth" "1:00" 
    bos create afsdb1 backuptemp cron -cmd "/usr/bin/vos backupsys -prefix temp -localauth" "0:01" 
    
  • Now go create the Oldfiles mount points in the appropriate places
    cd /afs/rnd.ru.is/user/f/fo/foley
    fs mkmount Oldfiles user.foley.backup
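  • To confirm the cron jobs registered, check their status (output is illustrative)
    bos status afsdb1 backupusers backuptemp
    # Instance backupusers, currently running normally.
    # Instance backuptemp, currently running normally.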
    

Adding More disks

http://docs.openafs.org/AdminGuide/ch03s08.html

  1. Get administrator tickets and tokens
     kinit <something>/afsadm; aklog
    
  2. Format and partition the new disk with ext4 and give it a useful label for later. For example:
    1. figure out the next /vicep?? name; in this case it is /vicepb
      fdisk /dev/sdb
      mkfs.ext4 -L grylavicepb /dev/sdb1
  3. Add the appropriate entry to /etc/fstab
    echo "/dev/sdb1 /vicepb ext4 defaults 0 2" >> /etc/fstab
  4. See if it gets mounted properly
    sudo mount -a 
  5. Kick the fs service to get it to rescan and notice the disk
    bos restart <machine name>  fs
  6. It should now show up in the list of partitions
    vos listpart afsdb1
  7. Now you can make new volumes, as in the sketch below
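    # For example (the volume name user.test is hypothetical):
    vos create afsdb1 b user.test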

Multihomed servers

It can be a problem if one of the network interfaces is on a private network (as on gryla). You can restrict which network interfaces the server uses by putting NetInfo and NetRestrict files in /var/lib/openafs/local.
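
A minimal sketch of the two files (the addresses are illustrative; each file holds one IP address per line, with no comments):

    /var/lib/openafs/local/NetInfo:
    130.208.209.37
    
    /var/lib/openafs/local/NetRestrict:
    192.168.1.37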

More info at http://docs.openafs.org/AdminGuide/ch03s09.html

Installing another server

These instructions come from examining the afs-newcell script by Sam Hartman. Make sure you are logged in as root.
Also, make sure that you do not have the hostname as 127.0.0.1 in /etc/hosts!
  1. Stop the client
    service openafs-client stop
  2. Stop the server
    service openafs-fileserver stop
  3. Copy the afs.keytab from a running server
    scp root@lithium.rnd.ru.is:/etc/openafs/server/afs.keytab /etc/openafs/server/afs.keytab
  4. Install the key (use the KVNO reported by klist -ke; 3 in this example)
    asetkey add 3 /etc/openafs/server/afs.keytab afs/rnd.ru.is
  5. Install the more secure key (see the security advisory)
     cp afs.keytab rxkad.keytab
  6. Edit the CellServDB in /etc/openafs/server/CellServDB and append it to the end of /etc/openafs/CellServDB if it isn't already in there.
  7. Secure, but possibly database-corrupting approach:
    1. Start up the server
      service openafs-fileserver start
    2. Add the admin user through bos
      bos adduser afscell1 root.admin -localauth
    3. Create the initial protection database by hand. Bit of a hack.
      pt_util -p /var/lib/openafs/db/prdb.DB0 -w
      1. Start typing in these commands exactly. Note the space before the last line!
        root.admin 128/20 1 -204 -204
        system:administrators 130/20 -204 -204 -204
         root.admin 1
    4. Start up the ptserver and vlserver
      bos create afscell1 ptserver simple /usr/lib/openafs/ptserver -localauth
      bos create afscell1 vlserver simple /usr/lib/openafs/vlserver -localauth
  8. Less secure, but more traditional approach:
    1. Shut down the server
      service openafs-fileserver stop
    2. Start up in noauth mode
      /usr/sbin/bosserver -noauth
    3. Add the admin user through bos
      bos adduser localhost root.admin -noauth
    4. Create the protection database and vlserver
      bos create localhost ptserver simple /usr/lib/openafs/ptserver -noauth
      bos create localhost vlserver simple /usr/lib/openafs/vlserver -noauth
    5. Make sure that system:administrators has root.admin in it
      pts membership system:administrators -noauth
    6. Kill the unauthenticated bosserver
      pkill bosserver
    7. Start the bos server
      service openafs-fileserver start
  9. Start up the fileserver
    bos create localhost fs fs -cmd /usr/lib/openafs/fileserver -cmd /usr/lib/openafs/volserver -cmd /usr/lib/openafs/salvager -localauth
  10. Set up the backup server
    sudo bos create localhost buserver simple "/usr/lib/openafs/buserver" -localauth
  11. Set up the update client to keep /etc/openafs in sync (the -t flag sets the polling interval in seconds)
    sudo bos create -server localhost -instance upclientetc -type simple -cmd "/usr/lib/openafs/upclient afsdb1.rnd.ru.is -crypt -t 120 /etc/openafs"
  12. Wait for a while (30 sec) for the ubik to sync up
  13. Start up the client
    service openafs-client start
  14. Check to make sure the servers are still running
    bos status localhost
  15. Add your users (root.admin and foley.afsadm) to /etc/openafs/server/UserList so they can administer the cell
  16. You are now ready to make volumes in the partition and mount them!
    
Replication

It is a good idea to replicate volumes that change infrequently, particularly root.afs and the other bottom-level volumes. Remember that you will need to make changes only in the read-write path /afs/.rnd.ru.is and then do a vos release <id>.
foley has created an example script in /afs/rnd.ru.is/project/rndnet/Scripts/afs-replicate-vols.sh

  1. Create the replication sites on the various servers
    vos addsite afsdb2.rnd.ru.is a root.afs
    vos addsite samvinna.rnd.ru.is a root.afs
  2. Now set up bos jobs to release them on a regular schedule (http://docs.openafs.org/AdminGuide/ch04s05.html)
    bos create afsdb1.rnd.ru.is releaserootafs cron "/usr/bin/vos release root.afs -local" 0:00
    bos create afsdb1.rnd.ru.is releaserootcell cron "/usr/bin/vos release root.cell -local" 0:00
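
To verify the replication sites, examine the volume (output is illustrative and abbreviated):

    vos examine root.afs
    #    number of sites -> 3
    #       server afsdb1.rnd.ru.is partition /vicepa RW Site
    #       server afsdb2.rnd.ru.is partition /vicepa RO Site
    #       server samvinna.rnd.ru.is partition /vicepa RO Site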
    
Shared keys and multiple Kerberos realms

Shared keys

Log into the servers and create matching krbtgt principals with matching KVNOs. (Make sure the KVNOs match or it will not work.)
Copies of the magic passwords are on the servers at /etc/krb5kdc/sharedpw.txt

Commands for cross-realm between CS.RU.IS and RND.RU.IS:

    kadmin.local -e "des3-hmac-sha1:normal des-cbc-crc:v4"
    addprinc -requires_preauth krbtgt/RND.RU.IS@CS.RU.IS
    addprinc -requires_preauth krbtgt/CS.RU.IS@RND.RU.IS

You will then need to modify the /etc/krb5.conf files on the clients (or set up DNS):

    [capaths]
    RND.RU.IS = {
              CS.RU.IS = .
    }
    CS.RU.IS = {
             RND.RU.IS = .
    }

To test it, get keys in one realm and see if you can kvno in the other:

    kinit foley@CS.RU.IS
    kvno foley@RND.RU.IS

Once it is working properly, you can get service keys (such as AFS) in the other realm:

    kinit foley@CS.RU.IS
    aklog rnd.ru.is
    klist
    Ticket cache: FILE:/tmp/krb5cc_7812_A14864
    Default principal: foley@CS.RU.IS
    
    Valid starting    Expires           Service principal
    17/04/2013 02:09  18/04/2013 02:09  krbtgt/CS.RU.IS@CS.RU.IS
    17/04/2013 02:09  18/04/2013 02:09  krbtgt/RND.RU.IS@CS.RU.IS
    17/04/2013 02:09  18/04/2013 02:09  afs/rnd.ru.is@RND.RU.IS
    
Multiple realms

  1. Put a space-separated list of the valid Kerberos realms to map into /etc/openafs/server/krb.conf, then restart. A minimal sketch of the file follows this list.
    • If you're feeling lazy, log in to the DB servers and run:
      cp /afs/rnd.ru.is/project/rndnet/SVN/Machines/AFSDB1/etc/openafs/server/krb.conf /etc/openafs/server/.
      bos restart localhost -localauth -all
  2. Then add the afs service keys into the KeyList
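
A minimal sketch of /etc/openafs/server/krb.conf for this cell (realm names taken from the sections above):

    RND.RU.IS CS.RU.IS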
    
CentOS setup

  • https://www.sit.auckland.ac.nz/Installing_OpenAFS_on_Red_Hat_distributions_including_Fedora

AFS Client

  1. Add this to /usr/vice/etc/CellServDB
    >ru.is          #Reykjavik University
    130.208.209.47  #njord.rnd.ru.is

AFS Server

  1. Install the server package
    sudo yum install -y openafs-server
  2. Edit /usr/afs/etc/CellServDB (same format as the client file)
    >ru.is            #Reykjavik University
    130.208.209.47    #njord.rnd.ru.is
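  3. Start the server and enable it at boot. (The service name below is the one shipped by the OpenAFS RPM packaging; verify it on your system.)
    sudo chkconfig openafs-server on
    sudo service openafs-server start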
    
Active Directory

Go to the ActiveDirectory KDC page for that setup.
    
Status

  • Machines 1, 3, and 5 on the stack above the catalyst (counted from the top) have Ubuntu 10.04 installed and sshd running.
  • root password: 100feet
  • Machines 2 and 4 are broken: 2 does not boot, and 4 has RAM errors and does not detect a disk.
  • Network is set up and Ubuntu installed.
    • gryla (130.208.209.37): Kerberos server, kadmin server, AFSDB server (rnd.ru.is), AFSCELL server (rnd.ru.is a and b)
    • stekkjastaur (130.208.209.39)
    • giljagaur (130.208.209.40)
  • No proper DNS for now, so they are in Joe's rnd.objid.net domain (e.g., gryla.rnd.objid.net).
    
Frequently Asked Questions

I set it up, but the bos commands aren't working and the fileserver isn't starting. The FileLog says something about "Couldn't get CPS for AnyUser"

Chances are good that your keys aren't set up properly. See https://lists.openafs.org/pipermail/openafs-info/2001-December/002736.html
Make sure that the KVNO is correct for all of the AFS keys installed on all of the servers.
