h1. AFS Server 


 {{toc}} 

 h2. Tasks 

 h2. Desired Hardware 

 * Drobo (automated raid system with iSCSI) http://www.drobo.com/products/business/b1200i/index.php#!prettyPhoto 

 h2. Instructions 

 * [[rndnet-documentation:AFS_Client_Installation]] 
 * [[How To Add Users]] 

 h3. Guides/links 

 * OpenAFS Official Documentation http://docs.openafs.org/ 
 * OpenAFS Detailed Guide http://techpubs.spinlocksolutions.com/dklar/afs.html 
 * slapd documentation https://help.ubuntu.com/community/OpenLDAPServer 
 * Kerberos/LDAP on Ubuntu (outdated) http://bobcares.com/blog/?p=435 
 * OpenAFS on Ubuntu (outdated) http://bobcares.com/blog/?p=501 
 * IBM AFS 3.6 Documentation http://www-01.ibm.com/software/stormgmt/afs/manuals/Library/unix/en_US/HTML/index.htm 
 * Interrealm AFS http://www.cs.cmu.edu/~help/afs/cross_realm.html 
 * AFS on Linux Presentation http://www.dia.uniroma3.it/~afscon09/docs/wiesand.pdf 
 * mod_waklog (apache integration with AFS) [[mod_waklog]] 
 * Kerberos and DNS http://www.faqs.org/faqs/kerberos-faq/general/section-47.html 
 * Adding another fileserver https://lists.openafs.org/pipermail/openafs-info/2006-September/023495.html 
 * Kerberos and AFS tutorial (Secure endpoints) http://www.secure-endpoints.com/talks/Kerberos_Tutorial_BPW2007.pdf 
 * Object storage: 
 ** http://workshop.openafs.org/afsbpw08/talks/thu_3/OpenAFS+ObjectStorage.pdf 
 ** http://www.dia.uniroma3.it/~afscon09/docs/reuter.pdf 
 * Key security issues 
 ** http://openafs.org/pages/security/OPENAFS-SA-2013-003.txt 
 ** http://www.openafs.org/pages/security/install-rxkad-k5-1.6.txt 

 h3. Getting started 

 * Install Ubuntu 12.04LTS 
 * Install Kerberos KDC (directions at [[Kerberos]]) 
 * Set up DNS autodiscovery (see the verification example after this list) 
 <pre> 
 rnd.ru.is. 		 IN 	 AFSDB 	 1 afsdb1.rnd.ru.is. 
 </pre> 

 * Ubuntu 
 ** client configuration is in @/etc/openafs@ 
 ** server configuration is in @/etc/openafs/server@ 
 * CentOS 
 ** client configuration is in @/usr/vice/etc@ (the traditional Transarc path) 
 ** server configuration is in @/usr/afs/etc@ 
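 To verify that the AFSDB record set up above is visible, you can query DNS directly (a minimal check, assuming @dig@ from the dnsutils package is available): 
 <pre># should return: 1 afsdb1.rnd.ru.is.
 dig +short -t AFSDB rnd.ru.is
 </pre> 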


 h3. Client 

 *Important!    Do this first!* 

 see [[AFS_Client_Installation]] 

 h3. Server 

 # Packages 
 <pre>sudo apt-get install openafs-krb5 openafs-{fileserver,dbserver}</pre> 

 h3. Keys and accounts 

 # Add entries to @/etc/openafs/CellServDB@ and @/etc/openafs/server/CellServDB@ 
 ## Note that the AFS cell *MUST* be lowercase 
 <pre> 
 >rnd.ru.is 			 # Reykjavik University 
 130.208.209.37 			 #afsdb1.rnd.objid.net  
 130.208.209.39 			 #afsdb2.rnd.objid.net  
 130.208.209.40 			 #afsdb3.rnd.objid.net  
 </pre> 
 # Edit your @/etc/krb5.conf@ 
 <pre>        
 [libdefaults] 
 dns_lookup_kdc = true 
 dns_lookup_realm = true 

 [realms] 
 RND.RU.IS = { 
                 kdc = kerberos.rnd.ru.is 
                 kdc = kerberos-1.rnd.ru.is 
                 kdc = kerberos-2.rnd.ru.is 
                 admin_server = kerberos.rnd.ru.is 
 } 
 </pre> 

 # Add principal @afs/rnd.ru.is@ and import the key to /etc/openafs/afs.keytab. This is also a good time to set up the normal host keytab. Replace HOSTNAME with the reverse-resolvable hostname. If this is on greenqloud, you will need to create principals for both the external and the internal hostname and add both to the keytab. 
 ## WARNING: Running @ktadd@ in kadmin increments the Kerberos key version number (KVNO), which will prevent the new servers from talking to the old ones. Instead, copy the existing afs.keytab to all of the machines using scp! 
 <pre> 
 kadmin.local 
 kadmin: addprinc -policy service -randkey -e des-cbc-crc:normal afs/rnd.ru.is 
 kadmin: ktadd -k /etc/openafs/afs.keytab -e des-cbc-crc:normal afs/rnd.ru.is 
 kadmin: ank -policy host -randkey host/HOSTNAME 
 kadmin: ktadd host/HOSTNAME 
 kadmin: ank -policy host -randkey host/INTERNALHOSTNAME 
 kadmin: ktadd host/INTERNALHOSTNAME 
 </pre> 
 # Remember the KVNO (key version number) 
 <pre>klist -ke /etc/openafs/afs.keytab</pre> 
 # Import the secret key into the AFS system.    Replace KVNO with the version number 
 <pre>asetkey add KVNO /etc/openafs/afs.keytab afs/rnd.ru.is 
 </pre> 
 # Copy the keytab to use the improved security 
 <pre>cp /etc/openafs/afs.keytab /etc/openafs/server/rxkad.keytab</pre> 
 # Now test with bos (the afs-fileserver service must be running; restart it if needed) 
 <pre>sudo service openafs-fileserver restart 
 sudo bos listkeys afsdb1 -localauth 
 #key 3 has cksum 2586520638 
 #Keys last changed on Fri Mar 30 02:10:25 2012. 
 #All done. 
 </pre>  
 # Set up Kerberized root shell access. Replace the example principal with yours, or add yours to the list. 
 <pre> 
 sudo vi /root/.k5login 
 foley@RND.RU.IS 
 </pre> 
 # Test 
 <pre>ksu</pre> 

 h3. Partitions for vice (AFS cell) 

 # Make the partitions in your filesystem (as an image) 
 <pre>cd /home 
 sudo dd if=/dev/zero of=vicepa.img bs=100M count=80     # (8 GB partition) 
 sudo mkfs.ext4 vicepa.img 
 sudo sh -c "echo '/home/vicepa.img /vicepa ext4 defaults,loop 0 2' >> /etc/fstab" 
 sudo tune2fs -c 0 -i 0 -m 0 vicepa.img 
 </pre> 
 # Now we mount it 
 <pre>sudo mkdir -p /vicepa 
 sudo mount /vicepa 
 </pre> 
 To add more disks, see [[AFS Server#Adding-More-disks|Adding More disks]] 
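 To confirm the loopback partition is mounted and will survive a reboot, a quick check (a minimal check): 
 <pre>df -h /vicepa
 grep vicepa /etc/fstab
 </pre> 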


 h3. Firewall settings 

 # Need to poke holes in the firewall also (http://security.fnal.gov/cookbook/KerberosPorts.html) 
 ## Login to the firewall    @bridge.objid.net@ 
 ## Open these ports in @/etc/shorewall/rules@ 
 <pre>## AFS and kerberos  
 ## From http://security.fnal.gov/cookbook/KerberosPorts.html 
 ACCEPT all      net:130.208.209.37-130.208.209.40 tcp,udp 88 #krb 
 ACCEPT all      net:130.208.209.37-130.208.209.40 tcp 749 
 ACCEPT all      net:130.208.209.37-130.208.209.40 tcp,udp 464 
 ACCEPT all      net:130.208.209.37-130.208.209.40 udp 749,4444 
 ACCEPT all      net:130.208.209.37-130.208.209.40 udp 9878 
 ACCEPT all      net:130.208.209.37-130.208.209.40 udp 7000:7007 

 </pre> 
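 After editing the rules, validate and reload the firewall (a minimal sketch, assuming the standard shorewall tools on @bridge.objid.net@): 
 <pre>sudo shorewall check      # validate the new rules
 sudo shorewall restart    # apply them
 </pre> 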

 h3. Make the new cell 

 # If you get issues with the case on the afs cell (all UPPERCASE is bad) 
 ## sudo dpkg-reconfigure openafs-client 
 ## sudo dpkg-reconfigure openafs-fileserver 
 # Make the Cell! 
 <pre>sudo afs-newcell</pre> 
 ## Yes, we meet the requirements 
 ## Principal: root/admin 
 ## If you have problems with the ip address in the CellServDB, make sure it matches in @/etc/hosts@! 
 ## If you see issues about network connections, you probably have an orphan process or something running on that port. 
 ## If you have to clear out the list of admin users, it is in @/etc/openafs/server/UserList@ 

 h3. Testing out the cell and making the root volume 

 # Test out kinit and tokens 
 <pre>sudo su # (We want to switch to the root user) 

 kinit root/admin 

 Password for root/admin@RND.RU.IS: PASSWORD 

 aklog 
 </pre> 
 ## If you get errors here, it means that weak crypto is not enabled in @/etc/krb5.conf@ 
 # Check your tickets and tokens with @klist -5f@ and @tokens@ 
 # Now create the root volume 
 <pre>afs-rootvol 
 ... 
 4) The AFS client must be running pointed at the new cell. 
 Do you meet these conditions? (y/n) y 

 You will need to select a server (hostname) and AFS partition on which to 
 create the root volumes. 

 What AFS Server should volumes be placed on? boron.rnd.ru.is 
 What partition? [a] a 
 </pre> 
 # Everything should now be happy! 
 # Add the update server to distribute config files over encrypted channel 
 <pre>sudo bos create localhost upserveretc simple "/usr/lib/openafs/upserver -crypt /etc/openafs" -localauth</pre> 
 # Add the backup server (database servers need this process) 
 <pre>sudo bos create localhost buserver simple "/usr/lib/openafs/buserver" -localauth</pre> 
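 To confirm that all the bos-managed processes came up, a quick status check (a minimal check): 
 <pre>sudo bos status localhost -long -localauth</pre> 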
 h3. Standard partitions 

 # Now we create some of the "standard" partitions. This is based upon the MIT configuration. Note that these all start with a very small quota and are their own "class", so you will need to schedule backup volumes for them individually. Note that you will need to work in the read-write path of the root, which is /afs/.rnd.ru.is 
 <pre> 
 cd /afs/.rnd.ru.is 
 vos create afsdb1 a activity 
 vos create afsdb1 a course 
 vos create afsdb1 a project 
 vos create afsdb1 a software 
 vos create afsdb1 a system 
 vos create afsdb1 a dept 
 vos create afsdb1 a org 
 vos create afsdb1 a reference 
 </pre> 
 # Now we mount them in the root area 
 <pre> 
 fs mkmount activity activity 
 fs mkmount course course 
 fs mkmount project project 
 fs mkmount software software 
 fs mkmount system system 
 fs mkmount dept dept 
 fs mkmount org org 
 fs mkmount reference reference 
 </pre> 
 # Finally, since this is a read-only volume, we have to "release" it 
 <pre>vos release root.cell</pre> 
 # And check to make sure the new directories show up in /afs/rnd.ru.is 
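 A quick way to confirm the new mount points (a minimal check; @fs checkvolumes@ just forces the client to refresh its volume information): 
 <pre>fs checkvolumes
 ls /afs/rnd.ru.is
 fs lsmount /afs/rnd.ru.is/project
 </pre> 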
 h3. Administrative users 

 * Login to the kerberos server as root and create an afsadm user 
 <pre>kadmin.local 
 addprinc -policy user <user>/afsadm 
 quit</pre> 
 * If you need super-super user capability, you will need to run this command with root/admin 
 <pre>kinit root/admin 
 aklog 
 bos adduser afsdb1 <user>/afsadm 
 </pre> 
 * Now you are a super-super user and can make the AFS server dance. Sometimes it takes a few minutes for the protection database to update, so you may have to wait. You are not done yet, though: you still need to add the user to the group system:administrators so they can do other useful operations. 
 * Create an equivalent user to the Kerberos user 
 <pre>pts createuser <user>.afsadm</pre> 
 * Add them to the group system:administrators.    You may need to wait for the ptserver to sync up 
 <pre>pts adduser <user>.afsadm system:administrators</pre> 
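 To verify the new administrative user took effect (a minimal check): 
 <pre>pts examine <user>.afsadm
 pts membership system:administrators
 </pre> 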


 h3. Backup partitions 

 * Create backup volumes for everything, everywhere! 
 <pre> vos backupsys </pre> 
 * Automate backups of the user volumes at 1:00 and the temp volumes at 0:01 
 <pre>bos create afsdb1 backupusers cron -cmd "/usr/bin/vos backupsys -prefix user -localauth" "1:00" 
 bos create afsdb1 backuptemp cron -cmd "/usr/bin/vos backupsys -prefix temp -localauth" "0:01" 
 </pre> 
 * Now create Oldfiles mount points in the appropriate places 
 <pre>cd /afs/rnd.ru.is/user/f/fo/foley 
 fs mkmount Oldfiles user.foley.backup 
 </pre> 
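 To check that the Oldfiles mount point resolves to the backup volume (a minimal check): 
 <pre>fs lsmount Oldfiles
 vos examine user.foley.backup
 </pre> 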

 h3. Adding More disks 

 http://docs.openafs.org/AdminGuide/ch03s08.html 

 # Get administrator tickets and tokens 
 <pre> kinit <something>/afsadm; aklog 
 </pre> 
 # Format and partition the new disk with ext4 and give it a useful label for later. For example: 
 ## figure out the next /vicep?? name; in this case it is /vicepb 
 <pre>fdisk /dev/sdb
 mkfs.ext4 -L grylavicepb /dev/sdb1</pre> 
 # Create the mount point and add the appropriate entry to /etc/fstab 
 <pre>sudo mkdir -p /vicepb
 sudo sh -c "echo '/dev/sdb1 /vicepb ext4 defaults 0 2' >> /etc/fstab"</pre> 
 # See if it gets mounted properly 
 <pre>sudo mount -a </pre> 
 # Kick the fs service to get it to rescan and notice the disk 
 <pre>bos restart <machine name> fs</pre> 
 # It should now show up in the list of partitions 
 <pre>vos listpart afsdb1</pre> 
 # Now you can make new volumes (see the example below) 
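 For example, to create and mount a volume on the new partition (a minimal sketch; the volume name, quota, and mount path are placeholders following the layout used earlier): 
 <pre>vos create <machine name> b user.newuser -maxquota 5000000
 fs mkmount /afs/.rnd.ru.is/user/n/ne/newuser user.newuser
 </pre> 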

 h3. Multihomed servers 

 It can be a problem if one of the network interfaces is on a private network (like on gryla). You can restrict which network interfaces the server registers and uses by putting NetInfo and NetRestrict files in @/var/lib/openafs/local@ 

 More info at http://docs.openafs.org/AdminGuide/ch03s09.html 
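 A minimal sketch of what those files might contain (the addresses are illustrative; each file takes one IP address per line, @NetInfo@ for addresses the server should register and @NetRestrict@ for addresses it should not): 
 <pre>echo "130.208.209.37" > /var/lib/openafs/local/NetInfo       # public address to register
 echo "192.168.1.37" > /var/lib/openafs/local/NetRestrict      # private address to exclude
 bos restart localhost -all -localauth                         # restart so the server re-reads them
 </pre> 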

 h3. Installing another server 

 These instructions were derived from examining the afs-newcell script by Sam Hartman. Make sure you are logged in as root. 
 *Also, make sure that you do not have the hostname as 127.0.0.1 in /etc/hosts!* 
 # Stop the client 
 <pre>service openafs-client stop</pre> 
 # Stop the server 
 <pre>service openafs-fileserver stop</pre> 
 # Copy the afs.keytab from a running server 
 <pre>scp root@lithium.rnd.ru.is:/etc/openafs/server/afs.keytab /etc/openafs/server/afs.keytab</pre> 
 # Install the key 
 <pre>asetkey add 3 /etc/openafs/server/afs.keytab afs/rnd.ru.is</pre> 
 # Install the more secure key (see the security advisory) 
 <pre>cp afs.keytab rxkad.keytab</pre> 
 # Edit the CellServDB in @/etc/openafs/server/CellServDB@ and append it to the end of @/etc/openafs/CellServDB@ if it isn't already in there. 
 # Secure, but possibly database corrupting: 
 ## Start up the server 
 <pre> service openafs-fileserver start </pre> 
 ## Add the admin user through bos 
 <pre>bos adduser afscell1 root.admin -localauth</pre> 
 ## Create initial protection database by hand.    Bit of a hack 
 <pre>pt_util -p /var/lib/openafs/db/prdb.DB0 -w</pre> 
 ### Type in these lines exactly. Note the space at the start of the last line! 
 <pre>root.admin 128/20 1 -204 -204 
 system:administrators 130/20 -204 -204 -204 
  root.admin 1</pre> 
 ## start up the ptserver and vlserver 
 <pre>bos create afscell1 ptserver simple /usr/lib/openafs/ptserver -localauth 
 bos create afscell1 vlserver simple /usr/lib/openafs/vlserver -localauth 
 </pre> 
 # Less secure, but more traditional 
 ## Shutdown the server 
 <pre> service openafs-fileserver stop </pre> 
 ## Startup in noauth mode 
 <pre>/usr/sbin/bosserver -noauth</pre> 
 ## Add the admin user through bos 
 <pre>bos adduser localhost root.admin -noauth</pre> 
 ## Protection database and vlserver 
 <pre>bos create localhost ptserver simple /usr/lib/openafs/ptserver -noauth 
 bos create localhost vlserver simple /usr/lib/openafs/vlserver -noauth 
 </pre> 
 ## Make sure that system:administrators has root.admin in it 
 <pre>pts membership system:administrators -noauth</pre> 
 ## Kill the unauthenticated bosserver 
 <pre>pkill bosserver</pre> 
 ## Start the bos server 
 <pre> service openafs-fileserver start </pre> 
 # start up the fileserver 
 <pre>bos create localhost fs fs -cmd /usr/lib/openafs/fileserver -cmd /usr/lib/openafs/volserver -cmd /usr/lib/openafs/salvager -localauth</pre> 
 # setup the backup server 
 <pre>sudo bos create localhost buserver simple "/usr/lib/openafs/buserver" -localauth</pre> 
 # setup the update clients to make /etc/openafs sync correctly every 2 hours 
 <pre>sudo bos create -server localhost -instance upclientetc -type simple -cmd "/usr/lib/openafs/upclient afsdb1.rnd.ru.is -crypt -t 120 /etc/openafs"</pre> 
 # Wait a while (~30 sec) for Ubik to sync up 
 # Start up the client 
 <pre>service openafs-client start</pre> 
 # Check to make sure the servers are still running 
 <pre>bos status localhost</pre> 
 # Add your users (root.admin and foley.afsadm) to /etc/openafs/server/UserList so they can administer the cell 
 # You are ready to make volumes in the partition and mount them! 

 h3. Replication 

 It is a good idea to replicate volumes that change infrequently, particularly root.afs and other bottom-level directories. Remember to make changes only in the read-write path /afs/.rnd.ru.is and then do a <pre>vos release <id></pre>. 
 foley has created a script example in @/afs/rnd.ru.is/project/rndnet/Scripts/afs-replicate-vols.sh@ 

 # Create the replications on the various servers 
 <pre>vos addsite afsdb2.rnd.ru.is a root.afs 
 vos addsite samvinna.rnd.ru.is a root.afs 
 </pre> 
 # Now setup a bos job to release them on a regular schedule http://docs.openafs.org/AdminGuide/ch04s05.html 
 <pre> 
 bos create afsdb1.rnd.ru.is releaserootafs cron "/usr/bin/vos release root.afs -local" 0:00 
 bos create afsdb1.rnd.ru.is releaserootcell cron "/usr/bin/vos release root.cell -local" 0:00 
 </pre> 
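 To confirm the read-only sites exist and the releases are going out (a minimal check): 
 <pre>vos examine root.afs
 vos examine root.cell
 </pre> 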

 h2. Shared keys and multiple Kerberos realms 

 h3. Shared keys 

 Log into both KDCs and create matching krbtgt principals with the same password and key version number (KVNO); if the KVNOs do not match, cross-realm will not work. 
 Copies of the magic passwords are on the servers at @/etc/krb5kdc/sharedpw.txt@ 

 Commands for crossrealm between CS.RU.IS and RND.RU.IS 
 <pre>kadmin.local    -e "des3-hmac-sha1:normal des-cbc-crc:v4" 
 addprinc -requires_preauth krbtgt/RND.RU.IS@CS.RU.IS 
 addprinc -requires_preauth krbtgt/CS.RU.IS@RND.RU.IS 
 </pre> 

 You will then need to modify the @/etc/krb5.conf@ files on the clients.    (or setup DNS) 
 <pre>[capaths] 
 RND.RU.IS = { 
           CS.RU.IS = . 
 } 
 CS.RU.IS = { 
          RND.RU.IS = . 
 }</pre> 

 To test it, get a ticket in one realm and see if you can obtain a service ticket (kvno) for a principal in the other: 
 <pre>kinit foley@CS.RU.IS 
 kvno foley@RND.RU.IS</pre> 

 Once it is properly working, you can get service keys (such as AFS) in the other realm. 
 <pre>kinit foley@CS.RU.IS 
 aklog rnd.ru.is 
  klist 
 Ticket cache: FILE:/tmp/krb5cc_7812_A14864 
 Default principal: foley@CS.RU.IS 

 Valid starting      Expires             Service principal 
 17/04/2013 02:09    18/04/2013 02:09    krbtgt/CS.RU.IS@CS.RU.IS 
 17/04/2013 02:09    18/04/2013 02:09    krbtgt/RND.RU.IS@CS.RU.IS 
 17/04/2013 02:09    18/04/2013 02:09    afs/rnd.ru.is@RND.RU.IS 
 </pre> 


 h3. Multiple realms 

 # Put a space-separated list of the valid Kerberos realms to map into @/etc/openafs/server/krb.conf@, then restart (see the example after this list). 
 ** If you're feeling lazy, log in to the DB servers and run: 
 <pre>cp /afs/rnd.ru.is/project/rndnet/SVN/Machines/AFSDB1/etc/openafs/server/krb.conf /etc/openafs/server/. 
 bos restart localhost -localauth -all 
 </pre> 
 # Then add the afs service keys into the KeyFile with asetkey 
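 As a concrete example, the contents of krb.conf would be just the realm names on one line (a sketch based on the description above; the realm names are the ones used in this cell): 
 <pre>cat /etc/openafs/server/krb.conf
 RND.RU.IS CS.RU.IS
 </pre> 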


 h1. CentOS setup 

 * https://www.sit.auckland.ac.nz/Installing_OpenAFS_on_Red_Hat_distributions_including_Fedora 

 h2. AFS Client 

 # add this to /usr/vice/etc/CellServDB 
 <pre>>ru.is            #Reykjavik University 
 130.208.209.47    #njord.rnd.ru.is 
 </pre> 

 h2. AFS Server  

 # sudo yum install -y openafs-server 
 # edit /usr/afs/etc/CellServDB 
 <pre>>ru.is              #Reykjavik University 
 130.208.209.47      #njord.rnd.ru.is 
 </pre> 

 h1. Active Directory 

 Go here for the [[ActiveDirectory KDC]] setup 

 h1. Status 

 * Machines number 1, 3 and 5 on the stack above the catalyst (counted from the top) have ubuntu 10.04 installed and sshd running. 
 * root password: 100feet 
 * Number 2 and 4 are broken. 2 does not boot and 4 has errors in RAM and does not detect a disk. 
 * Network is set up and Ubuntu installed. 
 ** gryla (130.208.209.37)    kerberos server, kadmin server, AFSDB server (rnd.ru.is), AFSCELL server (rnd.ru.is a and b) 
 ** stekkjastaur (130.208.209.39) 
 ** giljagaur (130.208.209.40). 
 * No proper DNS for now, so they are in Joe's rnd.objid.net domain (e.g., gryla.rnd.objid.net). 

 h1. Frequently Asked Questions 

 h2. I set it up, but the bos commands aren't working and the fileserver isn't starting. The FileLog says something about "Couldn't get CPS for AnyUser" 

 Chances are good that your keys aren't properly set up. https://lists.openafs.org/pipermail/openafs-info/2001-December/002736.html 
 Make sure that the KVNO is correct for all of the AFS keys installed on all of the servers (see the check below). 
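 A quick way to compare key versions across the pieces involved (a minimal check; run it on each server): 
 <pre>kvno afs/rnd.ru.is                                  # KVNO the KDC hands out
 sudo klist -ke /etc/openafs/server/rxkad.keytab     # KVNO of the key in the keytab on disk
 sudo bos listkeys localhost -localauth              # KVNO of the key the AFS server is using
 </pre> 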