Ubuntu Adding Samba Users and Groups

First I will add the users to the Ubuntu server itself, then create their Samba counterparts.

First we need to add a group for our users to join.
sudo groupadd smbusers

Then use the useradd command to create the user on the system and add them to the group we just created (or any existing group). The syntax is "useradd -G {group-name} username".
sudo useradd -G smbusers username
sudo passwd username (optional: only needed if the user will also log in locally; smbpasswd below sets the Samba password)
sudo smbpasswd -a username

If the user already exists and you just want to add them to the group, use usermod:
sudo usermod -a -G smbusers username
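To actually make use of the group, a share in /etc/samba/smb.conf can restrict access to it. A minimal sketch; the share name and path below are made up for illustration:

```
[shared]
   path = /srv/samba/shared
   valid users = @smbusers
   writable = yes
```

The @ prefix in "valid users" tells Samba to match a Unix group rather than a single user. Run "sudo service smbd restart" (or testparm first to check syntax) after editing.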


Ubuntu Server 10.04 LTS NIC Bonding (updated for 12.04)

First let's install ifenslave, which is required to bond the NICs.
sudo apt-get install ifenslave

Then we need to edit /etc/network/interfaces
sudo nano /etc/network/interfaces

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
#auto eth0
#iface eth0 inet dhcp

# This section works for 10.04
#auto bond0
#iface bond0 inet static
#address xxx.xxx.xxx.xxx
#gateway xxx.xxx.xxx.xxx
#netmask xxx.xxx.xxx.xxx
#bond-slaves eth0 eth1
# LACP configuration
#bond_mode 802.3ad
#bond_miimon 100
#bond_lacp_rate 1

# This section works for 12.04
auto eth0
iface eth0 inet manual
    bond-master bond0

auto eth1
iface eth1 inet manual
    bond-master bond0

auto bond0
iface bond0 inet static
address xxx.xxx.xxx.xxx
gateway xxx.xxx.xxx.xxx
netmask xxx.xxx.xxx.xxx
# LACP configuration
bond-mode 802.3ad
bond-miimon 100
bond-lacp-rate 1
bond-slaves none
*NOTE* If you only have 2 NICs and want them both bonded, comment out your primary eth0 lines.

Next update your resolv.conf so you have DNS; edit /etc/resolvconf/resolv.conf.d/head
sudo nano /etc/resolvconf/resolv.conf.d/head
nameserver xxx.xxx.xxx.xxx
nameserver xxx.xxx.xxx.xxx
search domain.name

Restart network services
sudo /etc/init.d/networking restart

*NOTE* Here is info about different bonding modes:
mode=0 (balance-rr) Round-robin policy: Transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.

mode=1 (active-backup) Active-backup policy: Only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The bond’s MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance. The primary option affects the behavior of this mode.

mode=2 (balance-xor) XOR policy: Transmit based on [(source MAC address XOR'd with destination MAC address) modulo slave count]. This selects the same slave for each destination MAC address. This mode provides load balancing and fault tolerance.

mode=3 (broadcast) Broadcast policy: transmits everything on all slave interfaces. This mode provides fault tolerance.

mode=4 (802.3ad) IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification.

* Prerequisites:
* Ethtool support in the base drivers for retrieving the speed and duplex of each slave.
* A switch that supports IEEE 802.3ad Dynamic link aggregation. Most switches will require some type of configuration to enable 802.3ad mode.

mode=5 (balance-tlb) Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.

* Prerequisite: Ethtool support in the base drivers for retrieving the speed of each slave.

mode=6 (balance-alb) Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond such that different peers use different hardware addresses for the server.
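The balance-xor hash in mode=2 is simple enough to sketch in shell. The octet values below are made up for illustration; the real driver XORs the full source and destination MAC addresses:

```shell
#!/bin/sh
# balance-xor slave selection: (source MAC XOR destination MAC) mod slave-count.
src=0x1a      # last octet of a hypothetical source MAC
dst=0x2b      # last octet of a hypothetical destination MAC
slaves=2      # two NICs in the bond
echo $(( (src ^ dst) % slaves ))
# prints: 1
```

Because the hash depends only on the MAC pair, every frame to a given destination always leaves on the same slave, which is why mode=2 balances across destinations but never splits one flow.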


Ubuntu Server 10.04 LTS RocketRaid 26xx DKMS Driver

First, install the build tools:
sudo apt-get install build-essential

Download the driver source from Highpoint:
lwp-download http://www.highpoint-tech.com/BIOS_Driver/rr26xx/2640X1-2640X4-2642/Linux/rr264x-linux-src-v1.3-legacy_single-101203-0910.tar.gz

Extract the tar:
tar -zxvf rr264x-linux-src-v1.3-legacy_single-101203-0910.tar.gz
mv rr2640-linux-src-v1.3-legacy_single rr26xx-linux-src-v1.3
cd rr26xx-linux-src-v1.3

Make sure DKMS is installed:
sudo aptitude install dkms

Create the DKMS configuration file:
cat >dkms.conf

Cut and Paste the following text:
POST_BUILD="do_Module.symvers rr26xx save $dkms_tree/$module/$module_version/build/Module.symvers"
Hit "Control-D" (as in Dog) to close the file.
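On its own that POST_BUILD line is a very sparse dkms.conf; dkms normally also wants a package name and version, plus the built module name and destination so the install step knows what to copy. A fuller sketch is below; the module name and destination directory are my assumptions based on typical SCSI drivers, not something confirmed by this guide:

```
# Hypothetical fuller dkms.conf -- name/version/module fields are assumed
PACKAGE_NAME="rr26xx"
PACKAGE_VERSION="1.3"
BUILT_MODULE_NAME[0]="rr26xx"
DEST_MODULE_LOCATION[0]="/kernel/drivers/scsi/"
AUTOINSTALL="yes"
POST_BUILD="do_Module.symvers rr26xx save $dkms_tree/$module/$module_version/build/Module.symvers"
```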

Copy the Makefile and Config files up to the top directory:
cp product/rr2640/linuxls/* .
mv Makefile Makefile_orig

Modify the HPT_ROOT in the Makefile:
sed 's/HPT_ROOT := ..\/..\/../HPT_ROOT := \/var\/lib\/dkms\/rr26xx\/1.3\/build/' Makefile_orig >Makefile
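That sed call just rewrites the HPT_ROOT line so the driver tree builds out of the DKMS build directory instead of a relative path. A quick way to see the substitution, using a one-line stand-in for the real Makefile:

```shell
# Feed a sample HPT_ROOT line through the same sed expression
printf 'HPT_ROOT := ../../..\n' \
  | sed 's/HPT_ROOT := ..\/..\/../HPT_ROOT := \/var\/lib\/dkms\/rr26xx\/1.3\/build/'
# prints: HPT_ROOT := /var/lib/dkms/rr26xx/1.3/build
```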

Move the source into /usr/src for DKMS:
cd ..
sudo mv rr26xx-linux-src-v1.3 /usr/src/rr26xx-1.3

Trick dkms into looking in the /usr/src directory for the precompiled RocketRaid object code:
mv /usr/src/rr26xx-1.3/lib /usr/src/rr26xx-1.3/real_lib
ln -s /usr/src/rr26xx-1.3/real_lib /usr/src/rr26xx-1.3/lib
*NOTE* For some reason dkms will not copy object files (.o) when doing builds, so we need to fool it.
*NOTE* Above symbolic link must be complete (absolute) path!
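You can rehearse that symlink trick safely in /tmp first; the paths below are stand-ins for /usr/src/rr26xx-1.3, and readlink confirms the link carries the full absolute target:

```shell
# Stand-in for: mv lib real_lib && ln -s <absolute>/real_lib lib
mkdir -p /tmp/rr26xx-demo/real_lib
ln -sfn /tmp/rr26xx-demo/real_lib /tmp/rr26xx-demo/lib
readlink /tmp/rr26xx-demo/lib
# prints: /tmp/rr26xx-demo/real_lib
```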

Add the source to DKMS:
sudo dkms add -m rr26xx -v 1.3

Build the module:
sudo dkms build -k `uname -r` -m rr26xx -v 1.3
*NOTE* This should compile cleanly. If it does not, check the build log at "/var/lib/dkms/rr26xx/1.3/build/make.log".

Next install the module:
sudo dkms install -k `uname -r` -m rr26xx -v 1.3

Finally, create the boot image:
sudo mkinitramfs -o /boot/initrd.img-`uname -r` `uname -r`

*UPDATE* I have verified that this survives a kernel update. I updated my server to 2.6.32-37-server and rebooted; my card was detected and the RAID array was intact.

This is adapted from https://help.ubuntu.com/community/RocketRaid#RocketRaid_26xx_Driver
with support from this forum http://ubuntuforums.org/showthread.php?t=1633597