Ubuntu Adding Samba Users and Groups

I will be adding users to my Ubuntu Server, then creating their Samba user counterparts.

First we need to add a group for our users to join.
sudo groupadd smbusers

Then we use the useradd command to create the user on the system and add it to the group we just created (or any existing group). The syntax is "useradd -G {group-name} username".
sudo useradd -G smbusers username
sudo passwd username (optional)
sudo smbpasswd -a username

If you already have a user that is created and want to add them to a group, you can do this.
sudo usermod -a -G smbusers username
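Before re-running usermod, it can help to check whether the user is already in the group. A small sketch (the in_group helper is my own, not a standard tool):

```shell
# in_group USER GROUP -> exit 0 if USER is already a member of GROUP
in_group() { id -nG "$1" | tr ' ' '\n' | grep -qxF "$2"; }

# Example: check the current user before adding them to smbusers
if in_group "$(id -un)" smbusers; then
  echo "already a member of smbusers"
else
  echo "not a member; run: sudo usermod -a -G smbusers $(id -un)"
fi
```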


Ubuntu Server 10.04 LTS NIC Bonding (updated for 12.04)

First let's install ifenslave, which is required to bond the NICs.
sudo apt-get install ifenslave

Then we need to edit /etc/network/interfaces
sudo nano /etc/network/interfaces

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
#auto eth0
#iface eth0 inet dhcp

# This section works for 10.04
#auto bond0
#iface bond0 inet static
#address xxx.xxx.xxx.xxx
#gateway xxx.xxx.xxx.xxx
#netmask xxx.xxx.xxx.xxx
#bond-slaves eth0 eth1
# LACP configuration
#bond_mode 802.3ad
#bond_miimon 100
#bond_lacp_rate 1

# This section works for 12.04
auto eth0
iface eth0 inet manual
    bond-master bond0

auto eth1
iface eth1 inet manual
    bond-master bond0

auto bond0
iface bond0 inet static
    address xxx.xxx.xxx.xxx
    gateway xxx.xxx.xxx.xxx
    netmask xxx.xxx.xxx.xxx
    # LACP configuration
    bond-mode 802.3ad
    bond-miimon 100
    bond-lacp-rate 1
    bond-slaves none
*NOTE* If you only have 2 NICs and want them both bonded, make sure your original eth0 lines (the dhcp stanza near the top) are commented out.

Next update your resolv.conf so you have DNS; edit /etc/resolvconf/resolv.conf.d/head
sudo nano /etc/resolvconf/resolv.conf.d/head
nameserver xxx.xxx.xxx.xxx
nameserver xxx.xxx.xxx.xxx
search domain.name

Restart network services
sudo /etc/init.d/networking restart
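After the restart you can confirm the bond came up in the right mode by reading /proc/net/bonding/bond0. The commands below run against a sample capture so they can be tried anywhere; the sample text is illustrative, not from a real box. On the live server, point grep at the real file.

```shell
# Rough shape of /proc/net/bonding/bond0 for an 802.3ad bond (illustrative)
cat > /tmp/bond0.sample <<'EOF'
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
MII Status: up
Slave Interface: eth0
MII Status: up
Slave Interface: eth1
MII Status: up
EOF

# On the live system: grep -E 'Bonding Mode|MII Status' /proc/net/bonding/bond0
grep -E 'Bonding Mode|MII Status' /tmp/bond0.sample
```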

*NOTE* Here is info about different bonding modes:
mode=0 (balance-rr) Round-robin policy: Transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.

mode=1 (active-backup) Active-backup policy: Only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The bond’s MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance. The primary option affects the behavior of this mode.

mode=2 (balance-xor) XOR policy: Transmit based on [(source MAC address XOR'd with destination MAC address) modulo slave count]. This selects the same slave for each destination MAC address. This mode provides load balancing and fault tolerance.

mode=3 (broadcast) Broadcast policy: transmits everything on all slave interfaces. This mode provides fault tolerance.

mode=4 (802.3ad) IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification.

* Pre-requisites:
* Ethtool support in the base drivers for retrieving the speed and duplex of each slave.
* A switch that supports IEEE 802.3ad Dynamic link aggregation. Most switches will require some type of configuration to enable 802.3ad mode.

mode=5 (balance-tlb) Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.

* Prerequisite: Ethtool support in the base drivers for retrieving the speed of each slave.

mode=6 (balance-alb) Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond such that different peers use different hardware addresses for the server.
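As a quick reference, the number-to-name mapping above can be captured in a tiny helper (the function name is mine, for convenience only):

```shell
# bond_mode_name N -> prints the bonding driver's name for mode N
bond_mode_name() {
  case "$1" in
    0) echo balance-rr ;;
    1) echo active-backup ;;
    2) echo balance-xor ;;
    3) echo broadcast ;;
    4) echo 802.3ad ;;
    5) echo balance-tlb ;;
    6) echo balance-alb ;;
    *) echo unknown ;;
  esac
}

bond_mode_name 4   # prints 802.3ad, the mode used in the LACP config above
```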


Ubuntu Server 10.04 LTS RocketRaid 26xx DKMS Driver

First, install the build tools:
sudo apt-get install build-essential

Download the driver source from Highpoint:
lwp-download http://www.highpoint-tech.com/BIOS_Driver/rr26xx/2640X1-2640X4-2642/Linux/rr264x-linux-src-v1.3-legacy_single-101203-0910.tar.gz

Extract the tar:
tar -zxvf rr264x-linux-src-v1.3-legacy_single-101203-0910.tar.gz
mv rr2640-linux-src-v1.3-legacy_single rr26xx-linux-src-v1.3
cd rr26xx-linux-src-v1.3

Make sure DKMS is installed:
sudo aptitude install dkms

Create the DKMS configuration file:
cat >dkms.conf

Cut and Paste the following text:
POST_BUILD="do_Module.symvers rr26xx save $dkms_tree/$module/$module_version/build/Module.symvers"
Hit "Control-D" (as in Dog) to close the file.
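Equivalently, you can write the file non-interactively with a quoted heredoc; the quoted EOF keeps $dkms_tree and friends literal, which is what DKMS expects to see in the file:

```shell
cat > dkms.conf <<'EOF'
POST_BUILD="do_Module.symvers rr26xx save $dkms_tree/$module/$module_version/build/Module.symvers"
EOF
```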

Copy the Makefile and Config files up to the top directory:
cp product/rr2640/linuxls/* .
mv Makefile Makefile_orig

Modify the HPT_ROOT in the Makefile:
sed 's/HPT_ROOT := ..\/..\/../HPT_ROOT := \/var\/lib\/dkms\/rr26xx\/1.3\/build/' Makefile_orig >Makefile
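To see what that substitution does, you can run it against a one-line stand-in (sample files under /tmp here; the real run edits Makefile_orig in the source tree):

```shell
# Stand-in for the relevant line of the vendor Makefile
printf 'HPT_ROOT := ../../..\n' > /tmp/Makefile_orig

sed 's/HPT_ROOT := ..\/..\/../HPT_ROOT := \/var\/lib\/dkms\/rr26xx\/1.3\/build/' /tmp/Makefile_orig > /tmp/Makefile
cat /tmp/Makefile   # HPT_ROOT := /var/lib/dkms/rr26xx/1.3/build
```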

Move the source into /usr/src for DKMS:
cd ..
sudo mv rr26xx-linux-src-v1.3 /usr/src/rr26xx-1.3

Trick dkms into looking in the /usr/src directory for the precompiled RocketRaid object code:
mv /usr/src/rr26xx-1.3/lib /usr/src/rr26xx-1.3/real_lib
ln -s /usr/src/rr26xx-1.3/real_lib /usr/src/rr26xx-1.3/lib
*NOTE* For some reason dkms will not copy object files (.o) when doing builds, so we need to fool it.
*NOTE* Above symbolic link must be complete (absolute) path!

Add the source to DKMS:
sudo dkms add -m rr26xx -v 1.3

Build the module:
sudo dkms build -k `uname -r` -m rr26xx -v 1.3
*NOTE* Hopefully this compiles cleanly. If not, check the build log in "/var/lib/dkms/rr26xx/1.3/build/make.log".

Next install the module:
sudo dkms install -k `uname -r` -m rr26xx -v 1.3

Finally, create the boot image:
sudo mkinitramfs -o /boot/initrd.img-`uname -r` `uname -r`

*UPDATE* I verified that this setup survives a kernel update. I updated my server to 2.6.32-37-server and rebooted; my card was detected and the RAID array was intact.

This is adapted from https://help.ubuntu.com/community/RocketRaid#RocketRaid_26xx_Driver
with support from this forum http://ubuntuforums.org/showthread.php?t=1633597

Mythbuntu: Antec Fusion v1 VFD & MCE Remote (Updated for 10.10)

This is an updated how-to based on my notes from an earlier post. Things have changed since Ubuntu 9.04; I am now on 10.10 due to the inclusion of TRIM support. The mceusb remote should just work out of the box, but here are some notes on it. The goal here is to get your Antec Fusion v1 VFD working.

Mythbuntu: MythTV/XBMC Switching

This script allows switching between the MythTV frontend and XBMC once you have lirc and irexec installed and running.


To stop the MythTV frontend from automatically starting on reboot, remove its autostart entry:
cd ~/.config/autostart
rm mythtv.desktop

Ubuntu Server 10.04 LTS Allowing Symlinks with Samba

Apparently in 10.04 you need to add a few lines to allow symlinks in your samba share. I kept getting access denied in Windows.

Under [global] in /etc/samba/smb.conf, add the following 3 lines:
follow symlinks = yes
wide links = yes
unix extensions = no
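For context, here is what the relevant part of [global] ends up looking like, checked against a sample copy (the sketch writes a sample to /tmp; on the server the file is /etc/samba/smb.conf):

```shell
cat > /tmp/smb.conf.sample <<'EOF'
[global]
   follow symlinks = yes
   wide links = yes
   unix extensions = no
EOF

# Confirm all three options are present
for opt in 'follow symlinks' 'wide links' 'unix extensions'; do
  grep -q "$opt" /tmp/smb.conf.sample && echo "found: $opt"
done
```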

Now restart Samba
sudo /etc/init.d/smbd restart

You should be good to go!

Ubuntu 10.04 LTS bootable software RAID-1

Start from your home directory
cd ~

Install mdadm
sudo apt-get install mdadm

You need 2 modules loaded: md and raid1. Ubuntu 10.04 should have md loaded automatically; you can verify this by running "fgrep CONFIG_MD /boot/config-$(uname -r)".

We need raid1 listed in /etc/modules so it loads on reboot. Note that "sudo echo raid1 >> /etc/modules" would fail, because the redirect runs in your non-root shell; pipe through tee instead.
echo raid1 | sudo tee -a /etc/modules

Now we will load it manually so we don't have to reboot
sudo modprobe raid1

Verify it's loaded
lsmod | grep raid1

Next we are going to copy the partition info from sda to sdb.
sudo sfdisk -d /dev/sda > sda.out
sudo sfdisk -f /dev/sdb < sda.out
*NOTE* If you get "I don't like these partitions - nothing changed.", verify the copy anyway by comparing the output of "sfdisk -l /dev/sda" and "sfdisk -l /dev/sdb".
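A quick way to do that comparison is to dump both tables and diff everything after the device names; identical layouts mean the copy worked. Sketched here on stand-in files with made-up contents (on the real system, feed it "sudo sfdisk -d /dev/sda" and "/dev/sdb" output):

```shell
# Stand-ins for "sudo sfdisk -d /dev/sdX" output (contents illustrative)
printf '/dev/sda1 : start= 2048, size= 1953523120, Id=fd\n' > /tmp/sda.out
printf '/dev/sdb1 : start= 2048, size= 1953523120, Id=fd\n' > /tmp/sdb.out

# Strip the "/dev/sdX1" prefix so only the layout is compared
cut -d: -f2- /tmp/sda.out > /tmp/sda.layout
cut -d: -f2- /tmp/sdb.out > /tmp/sdb.layout
diff /tmp/sda.layout /tmp/sdb.layout && echo "partition layouts match"
```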

Change the partition type of the /dev/sdb Linux partition(s) to "Linux raid autodetect"
sudo sfdisk --change-id /dev/sdb 1 fd

Now we're ready to create the array. We specify a RAID 1 array with 2 devices. The first drive is missing (we add it later) and the second is /dev/sdb1:
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1

Now, we need to create and update the mdadm.conf.
sudo cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.bak && cp /etc/mdadm/mdadm.conf mdadm.conf && sudo mdadm --detail --scan >> mdadm.conf && sudo cp mdadm.conf /etc/mdadm/mdadm.conf
*NOTE* The working copy is made without sudo so your shell can append to it; a root-owned copy would make the ">>" redirect fail with permission denied.
*NOTE* It will put in a parameter metadata=00.90 that will cause warnings later. Apparently, this is a bug and it is safe to remove it.

Now, we format the raid volume
sudo mkfs -t ext4 /dev/md0

Find out the UUID of the array
sudo blkid

Make a copy of /etc/fstab in your home directory and change root to mount to the UUID of the array.  Don't change the real /etc/fstab quite yet.
sudo cp /etc/fstab /etc/fstab.bak && sudo cp /etc/fstab fstab && sudo nano fstab

Really quick, check what kernel you are running you will need this for the next section.
uname -r

Next is adding a custom GRUB2 setup
sudo cp /etc/grub.d/40_custom 09_swraid1_setup && sudo nano 09_swraid1_setup

Replace "2.6.32-36-server" with your kernel version you got from running "uname -r"
menuentry 'Ubuntu, with Linux 2.6.32-36-server' --class ubuntu --class gnu-linux --class gnu --class os {
        insmod raid
        insmod mdraid
        insmod ext2
        set root='(md0)'
        linux   /boot/vmlinuz-2.6.32-36-server root=/dev/md0 ro   quiet
        initrd  /boot/initrd.img-2.6.32-36-server
}

Now copy both files we just created to their respective locations
sudo cp fstab /etc/ && sudo cp 09_swraid1_setup /etc/grub.d/

Let's update Grub
sudo update-grub && sudo update-initramfs -u

And make sure Grub is installed on both drives
sudo grub-install /dev/sda && sudo grub-install /dev/sdb

Create a directory called "tmpraid"
sudo mkdir /tmpraid

Mount the array on /tmpraid and copy the running system onto it
sudo mount /dev/md0 /tmpraid && sudo rsync -vaxP / /tmpraid

Now reboot; the custom GRUB entry will boot from /dev/md0
sudo reboot

Change the partition type of sda now
sudo sfdisk --change-id /dev/sda 1 fd

Add it to the array:
sudo mdadm --add /dev/md0 /dev/sda1

Let's watch /dev/sda sync:
watch cat /proc/mdstat
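If you'd rather grab just the progress figure, grep works too. Demonstrated on a sample capture (the numbers are made up; a live /proc/mdstat shows your real resync state):

```shell
cat > /tmp/mdstat.sample <<'EOF'
md0 : active raid1 sda1[2] sdb1[1]
      976759936 blocks [2/1] [_U]
      [==>..................]  recovery = 12.6% (123456789/976759936) finish=84.0min speed=168523K/sec
EOF

# On the live system: grep -o 'recovery = [0-9.]*%' /proc/mdstat
grep -o 'recovery = [0-9.]*%' /tmp/mdstat.sample
```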

Let's delete the GRUB entry we created earlier since it is no longer needed
sudo rm -f /etc/grub.d/09_swraid1_setup

Now let's finish up
sudo update-grub && sudo update-initramfs -u

Now we're done, reboot and test if you so wish.


Disable/Enable Windows IPv6 Tunnels

I just found this link, and wanted to preserve it. Very nice for disabling IPv6 tunnels on Windows machines. I am running dual-stack, and the tunnels felt like they were in the way.


Windows Disable IPv6 RA solicitations

This works for both Windows 7 and Server 2008. I use it when I have set a static IPv6 address and want to stop the Windows box from picking up more than one IPv6 address.

First we want to find the index value of your NIC.

C:\> netsh int ipv6 sh int
Idx     Met         MTU          State                Name
---  ----------  ----------  ------------  ---------------------------
  1          50  4294967295  connected     Loopback Pseudo-Interface 1
 11          10        1500  connected     Local Area Connection

Now apply the value to disable the RA solicitation

C:\> netsh int ipv6 set int 11 routerdiscovery=disabled

Now, after a reboot, you should have only one IPv6 address.