LTSPManual


The boot process of a thin client

                        Load the Linux kernel into the memory of the thin
                        client. This can be done several different ways,
                        including booting from a PXE- or Etherboot-capable
                        network card, or from local media such as a floppy
                        disk, CD-ROM, or USB stick.


                        Each of these booting methods will be explained
                        later in this chapter. For now, it should be noted
                        that in almost all cases, booting from a PXE-capable
                        network card is the fastest method and the easiest
                        to set up.


                       Once the kernel has been loaded into memory, it will
                       begin executing.
                   


                       The kernel will initialize the entire system and all
                       of the peripherals that it recognizes.
                   


                       This is where the fun really begins. During the
                       kernel loading process, an initramfs image will also
                       be loaded into memory.
                   


                       Normally, when the kernel is finished booting, it will
                       launch the new task launcher `upstart`,
                       which will handle starting up a server or workstation.
                       But, in this case, we've instructed the kernel to load
                       a small shell script instead. This shell script is
                        called `/init`, and lives in the root
                       of the initramfs.
                   


                        The `/init` script begins by mounting
                        `/proc` and
                        `/sys`, then starts `udev` to
                        discover and initialize hardware, especially the
                        network card, which is needed for every aspect of the
                        boot from here on. As well, it creates a small ram
                        disk, where any local storage that is needed (to
                        hold a generated `xorg.conf` file, for
                        instance) can be written to.
                   


                        The loopback network interface is
                        configured. This is the networking interface that has
                        `127.0.0.1` as its IP address.
                   


                        A small DHCP client called `ipconfig`
                        will then be run, to make another query to the DHCP
                        server. This separate user-space query picks up the
                        information supplied in the dhcpd.conf file, like the
                        NFS root server, default gateway, and other important
                        parameters.
                   


                       When `ipconfig` gets a reply from the
                       server, the information it receives is used to
                       configure the Ethernet interface, and determine the
                       server to mount the root from.
                   


                        Up to this point, the root filesystem has been a ram
                        disk. Now, the `/init` script will
                        mount a new root filesystem via either NBD or NFS. In
                        the case of NBD, the image that is normally loaded is
                        `/opt/ltsp/images/<arch>.img`.
                        If the root is mounted via NFS, then the directory
                        that is exported from the server is typically
                        `/opt/ltsp/<arch>`.
                        The script can't just mount the new filesystem as
                        `/`. It must
                        first mount it in a separate directory. Then, it will
                        do a `run-init`, which will swap the
                        current root filesystem for the new one. When it
                        completes, the filesystem will be mounted on
                        `/`. At
                        this point, any directories that need to be writable
                        for regular startup to occur, like
                        `/tmp` or `/var`, are
                        mounted as well.
                   


                       Once the mounting of the new root filesystem is
                       complete, we are done with the `/init` shell script and
                       we need to invoke the real `/sbin/init` program.
                   


                        The `init` program will read the
                        `/etc/event.d`
                        directory and begin setting up the thin client
                        environment. From there, upstart will begin reading
                        the start-up commands in `/etc/rcS.d`.
                   


                       It will execute the `ltsp-client-setup` command
                       which will configure many aspects of the thin client environment,
                       such as checking if local devices need starting, loading any
                       specified modules, etc.
                   


                        Next, the `init` program will begin
                        to execute commands in the `/etc/rc2.d` directory.
                   


                        One of the items in the `/etc/rc2.d`
                        directory is the `ltsp-client-core`
                       command that will be run while the thin client is
                       booting.
                   


                        The `lts.conf` file will be parsed,
                       and all of the parameters in that file that pertain to
                       this thin client will be set as environment variables
                       for the `ltsp-client-core` script
                       to use.
                   


                        If sound is configured at this point, the
                       `pulseaudio`
                       daemon is started, to allow remote audio connections
                       from the server to connect and play on the thin
                       client.
                   


                       If the thin client has local device support enabled,
                       the `ltspfsd` program is started to
                       allow the server to read from devices such as memory
                        sticks or CD-ROMs attached to the thin client.
                   


                        At this point, any of the screen sessions you've
                        defined in your `lts.conf` will be
                        executed.
                   


                        Screen sessions are what you want to launch on all
                        of the virtual screens on your terminal. These are the
                        standard virtual consoles that all GNU/Linux
                        distributions usually have, i.e.
                        `tty1` through
                        `tty12`.
                   


                        By default, a standard character-based getty will be
                        run on screen 1 (`SCREEN_01` in the `lts.conf` file).
                   


                        As well, if nothing else is specified in the
                        `lts.conf`
                        file, an `ldm` screen script is run
                        on `SCREEN_07`. The LTSP Display
                        Manager (`ldm`)
                        is the default login manager for LTSP.
                   


                        If `SCREEN_07` is set to a value of
                        `ldm` or `startx`, then the X
                        Window System will be launched, giving you a graphical user interface.
                   


                        By default, the Xorg server will auto-probe the card,
                        create a default `xorg.conf` file on the
                        ram disk in the terminal, and start up Xorg with that custom config.
                   


                        The X server will either start an encrypted `ssh`
                        tunnel to the server, in the case of `ldm`, or make an
                        XDMCP query to the LTSP server, in the case of
                        `startx`. Either way,
                        a login box will appear on the terminal.
                   


                       At this point, the user can log in. They'll get a
                       session on the server.
                   


                       This confuses a lot of people at first. They are
                       sitting at a thin client, but they are running a
                       session on the server. All commands they run will be
                       run on the server, but the output will be displayed on
                       the thin client.
                   


Installation

               With the integration of LTSP into distributions, installation
               of LTSP is now usually as easy as adding the LTSP packages in
               your distro's package manager.  Consult your distribution's
               documentation for details on how to install LTSP on your
               particular system.
           


                However, as a general guideline, usually after you've installed
                your distribution's LTSP packages, configured your network
                interfaces, and set up some kind of DHCP server, you'd run (as root):
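
                On most distributions, the command that builds the client
                chroot is `ltsp-build-client` (it downloads packages, so it
                can take a while):

                    ltsp-build-client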
           


                If you are on a 64-bit system but your clients
                have another architecture, use the --arch option,
                e.g. `ltsp-build-client --arch i386`.
           


               After that, you should be able to boot your first thin
               client.
           



General thin client parameters

            There are several variables that one can define in the lts.conf
            file which control how the thin client interacts with the server.
       

Modules and startup scripts

            For the most part, LTSP does a very good job of detecting what
            hardware is on your thin client. However, it's possible that you
            may want to manually specify a kernel module to load after boot.
            Alternatively, you may have a script of your own that you've
            put in the chroot, and want to make sure it gets run at startup. LTSP
            provides some hooks to allow you to do this, as sketched below.
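
            A minimal lts.conf sketch, assuming the standard MODULE_nn and
            RCFILE_nn parameters (check your LTSP version's lts.conf
            documentation; the module and script names here are placeholders):

                [default]
                    MODULE_01 = snd-intel8x0
                    RCFILE_01 = /etc/rc.local-custom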
       

Sound in LTSP

           Sound in LTSP is handled by running the
           `pulseaudio` daemon on the
           thin client, which sits on top of the ALSA kernel drivers.  The
           thin client's kernel should detect the thin client sound hardware
           via the usual udev mechanisms, and enable the sound card.  At boot
           time, the `pulseaudio` daemon is run, which allows the thin client to
           receive audio streams via network connections.
       


            On login, the LDM sets both the PULSE_SERVER and
            ESPEAKER
            environment variables for the X windows session, to allow the
            server to re-route the sound over a TCP/IP socket to the thin
            client.
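
            For example, the variable ends up holding a value along these
            lines (the address is illustrative; 4713 is pulseaudio's default
            native-protocol TCP port):

                PULSE_SERVER=tcp:192.168.0.100:4713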
       

X-Windows parameters

            Setting up X windows on the thin client is normally a pretty easy
            operation. The thin client uses X.org's own auto-configuration
            mode to let X determine what it thinks is installed in the box.
       


            However, this doesn't always work, whether due to
            strange or buggy hardware, buggy drivers in X.org, or because X
            detects default settings that you don't want. For instance, it may
            detect that your monitor is capable of doing 1280x1024, but you'd
            prefer it to come up in 1024x768 resolution. Fortunately, you can
            tweak individual X settings, or, alternatively, simply provide
            your own `xorg.conf` to use.
       

X.org Configuration


XRANDR setting for managing displays

            The new Xorg server has the ability to figure out (for the most part,
            to the extent that the driver helps in the process) the best mode for
            the video card.  Moreover, with the new dependency upon hal and XRandR,
            it is recommended to add input devices with hal and modify video modes
            with XRandR 1.2 calls.  In essence, xorg.conf becomes a place to
            fix deficiencies in poorly written drivers, or to force certain
            unusual driver behavior in a particular environment, in a way that
            cannot otherwise be done through hal or XRandR.
       

New Xorg structure within LTSP

               To accommodate this, Xorg now understands partial xorg.conf files.
               Meaning you only add the sections that you need to force.  Otherwise, it
               discovers everything.  That's why you might see minimalist xorg.conf
               files in your LTSP chroot.
           


                The `screen-session.d` directory (located in the
                chroot's `/usr/share/ltsp`) is a structure of shell scripts, all
                of which are sourced in order (similar to
                `/etc/profile.d` or the rc.d mechanisms that you
                may be familiar with).  These scripts are executed at the beginning of
                each session, but before the Xserver (if the session runs an Xserver) is
                launched.  You can add whatever script you want that may need to run at
                that point.  For LTSP, one thing we use it for is to set up
                the environment in which the
                Xserver will be launched.  This entails not just generating an
                `xorg.conf`
                file as needed, but also configuring the parameters that the Xserver
                should be launched with.  The nice thing about a collection of sourced
                scripts is that it gives the distribution or the
                administrator the flexibility to add additional scripts that may be
                required for that distribution or for a particular network environment,
                without modifying existing files (and therefore without the extra
                maintenance of tracking changes in the upstream code).
           

Script structure

               Each script is named with a prefix letter, then an order number, then a
               name.  The prefix letter determines when the scripts of that prefix are
               executed and the order number determines in what order.
           


                Prefixes that may be used include:
           


               S - Is a script that runs at the beginning of a session (screen script)
               K - Is a script that runs at the end of a session (screen script)
               XS - Is a script that is only run at the beginning of screen scripts
               that run an Xserver
           


                All of the scripts that generate an xorg.conf or modify the Xserver
                arguments are XS* scripts.
           


               These scripts are mostly organized by the particular lts.conf parameter
               or function that they affect.  For example, XS85-xvideoram adds the
               ability to specify the X_VIDEO_RAM parameter in lts.conf and force the
               amount of video ram used by the driver.
           


                If you are going to create your own script, I recommend looking at the
                other scripts to understand the structure.  Since many hacks may impact
                the same xorg.conf sections, each section has a list of hack functions
                assigned to it, and in your script, you would create a function and add
                it to the list of functions for that section.  For example, if you add
                something to the Monitor section (that cannot already be added through
                existing functions), you would create a function in your script and add
                it to the monitor_hacks function list, as in the sketch below.  Again, it
                is easiest to read the code and look at examples to understand how to
                write a new script.
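
                A minimal sketch of such a script (the file and function names
                are placeholders; follow the existing XS* scripts in your
                chroot for the exact conventions):

                    # XS50-my-monitor-tweak (hypothetical name)
                    # Emit an extra Option line for the generated Monitor section.
                    my_monitor_tweak() {
                        printf '\tOption "DPMS" "false"\n'
                    }
                    # Register the function on the Monitor section's hook list.
                    monitor_hacks="$monitor_hacks my_monitor_tweak"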
           


                Also, please note that one of the lts.conf parameters you can specify
                selects the X configuration script itself.
                It should be set to a path to a script.  So, if you have the old
                configure-x.sh and like it better, simply copy it into the chroot, to
                say,
                `/opt/ltsp/<arch>/usr/share/ltsp/configure-x.sh`,
                point that parameter at it in `lts.conf`,
                and you will be back to where you were.
           

XRandR parameters


Touchscreen configuration

           Description to be added later.
       

Local Applications

           Description to be added later.
       

The LDM display manager


Introduction

                The LTSP Display Manager, or `ldm`, is the
                display manager specifically written by the LTSP project to
                handle logins to a GNU/Linux server. It is the default display
                manager for LTSP thin clients, and has a
                lot of useful features:
           


                       It is written in C, for speed and efficiency on low
                       end clients.
                   


                       It supports logging in via either a greeter (a
                       graphical login application) or autologin.
                   


                       It can be configured to encrypt X Windows traffic,
                       for increased security, or leave it unencrypted, for
                       better performance on slower clients.
                   


                        It contains a simple load-balancing system, to
                        let the system administrator balance load
                        across several servers.
                   


                We'll go over the `lts.conf` entries you'll
                need to control these features below.
           

Theory of operation

                To help understand the following sections, a bit of an
                explanation of how `ldm` does its work is
                needed. Most thin client display managers tend to run on
                the server. The `ldm` display manager is
                unique in that it runs on the thin client itself. This gives
                the thin client a lot of choice as to how it will set
                up the connection. A typical login session goes as follows:
           


                        `ldm` launches and starts up the X
                        Windows display on the thin client.
                   


                        `ldm` starts up the greeter, which is
                        a graphical program which presents the user with a
                        nice login display and allows them to select their
                        session, language, and the host they'd like to log into.
                   


                        `ldm` collects the information from
                        the greeter, and starts an ssh session with the
                       server. This ssh connection is used to create an ssh
                       master socket, which is used by all subsequent
                       operations.
                   


                        Now, the user's selected session is started via the
                        master socket. Depending on whether or not an
                        encrypted connection has been requested via the
                        `LDM_DIRECTX` parameter, the session is either connected
                        back to the local display via the ssh tunnel, or via a
                        regular TCP/IP connection.
                   


                       During the session, any memory sticks, or other
                       local devices that are plugged in, communicate their
                       status to the server via the ssh control socket.
                   


                        When the user exits the session, the ssh connection is
                        closed down, the X server is stopped, and `ldm`
                        restarts itself, so everything starts with a clean slate.
                   

Encrypted versus unencrypted sessions

                By default, LTSP5 encrypts the X session between the thin
                client and the server.
                This makes your session more secure, but at the cost of
                increased processing power required on the thin client and on
                the server. If processing power is a concern to you, it's very
                easy to specify that either an individual
                workstation, or the default setting, should use an unencrypted
                connection. To do so, simply specify:
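
                    LDM_DIRECTX = True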
           


                in your `lts.conf` file in the appropriate
                section.
           



Load balancing features

               In this version of LTSP, there's a simple load-balancing
               solution implemented that allows administrators to have
               multiple LTSP servers on the network, and allow the thin
               client to pick which one of the servers it would like to log
               into.
           


                The host selection system is simple and flexible enough to
                allow administrators to implement their own policy on how they
                want the load balancing to happen: either a random,
                load-based, or round-robin system. See the Session dispatching
                section below for details.
           

RC script capabilities

               LDM has a very good system for handling user-supplied rc.d
               scripts.  This allows people looking to add site-specific
               customizations to their LTSP setups an easy way to integrate
               this functionality into LTSP.
           


                These rc.d scripts can be placed in
                `/opt/ltsp/<arch>/usr/share/ldm/rc.d`.  They are
                sourced in the usual rc.d type method, so you must make sure
                that any script you write will not make a call to
                `exit`.
           


               The files start with the letter I, S, K, or X, and have two
               digits after them, allowing you to place them in order of
               execution.  The letters stand for:
           


                        I scripts are executed at the
                        start of LDM, before the greeter has been presented.
                   


                        S scripts are executed after the user
                        has logged in, but before the X session is run.
                   


                        X scripts are executed while the X
                        session is being executed.
                   


                        K scripts are executed after the X
                        session has ended, but before the user logs out entirely.
                   


               Your scripts can make use of the following environment
               variables in the S, X, and K scripts:
           


                            LDM_USERNAME: the username the user supplied at login.
                       


                            LDM_SOCKET: the path to the ssh control socket that LDM has open
                            for communication with the server.
                       


                            LDM_SERVER: the current server that LDM is connected to.
                       


                            LTSP_CLIENT: the IP address of the thin client.
                       


                You can use these variables to create scripts that customize
                behaviors at login time.  For instance, let's say you were
                running the GNOME desktop environment, and wanted to force your
                users to have blank-only mode for their screen savers, to save
                network bandwidth.
           


                Since the script is actually running on the thin client, you
                want this script to set this
                up on the server, where the GNOME session is running.  That's
                where you can make use of the LDM_SOCKET and
                LDM_SERVER environment variables to run an
                `ssh` command on the server, using the control
                socket that LDM has set up.  Here's an example script.  You
                could install it into the rc.d directory mentioned above, as
                an S-prefixed script:
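
                A minimal sketch, assuming a GNOME desktop of this era with
                gnome-screensaver configured through gconf (the file name is
                a placeholder):

                    #!/bin/sh
                    # S99-blank-screensaver (hypothetical name)
                    # Force blank-only screensaver mode in the user's session
                    # on the server, reusing the ssh control socket LDM opened.
                    ssh -S "$LDM_SOCKET" "$LDM_SERVER" \
                        gconftool-2 --set /apps/gnome-screensaver/mode \
                        --type string blank-only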
           


               Using this mechanism, it's easy to customize your LTSP setup to
               your needs.
           

LDM lts.conf parameters


Multiple server setup

                A multiple server setup is useful for larger thin client
                networks. Instead of using one big server, it makes it
                possible to use smaller servers, and dispatch users among them.
                You can adjust computing resources as demand grows simply
                by adding a new server. To make sure that every server behaves
                the same from the user's point of view, the new services and
                configurations that are required will be discussed. In
                addition, some configurations specific to thin clients will be
                presented.
           

Infrastructure setup


Network topology

                The network topology is the same as a standalone server
                setup, except that there is more than one server on the thin
                client LAN.
           


                You will need to select one server to act as the primary
                server. This server will be used to run additional services,
                hold users' files, and network boot thin clients.
           


               Secondary servers will be used only to run desktop sessions.
               They are simpler, and will be configured to use the central
               services from the primary server.
           

Common authentication

                A user should be able to start a session with the same login
                and password, no matter which server they connect to. For this
                purpose, a central authentication mechanism must be used.
                There are many possibilities. Here are the major
                technologies:
           


                        LDAP authentication: On the master server, set up an
                        OpenLDAP server. Configure each server to use this
                        LDAP server as the authentication base.
                   


                        NIS authentication: On the master server, set up a
                        NIS server. Configure each server to use this NIS
                       server for the authentication.
                   


                       Winbind authentication: Useful if you already have
                       an Active Directory server.
                   


               For detailed instructions, see their respective manuals.

Shared home directories

                Shared home directories are easy to set up using an NFS server
                on either the primary LTSP server, or even better, a standalone
                NFS server.  Other more modern, faster (and consequently more
                expensive) options include a SAN, perhaps even a
                fibre-channel RAID SAN.  Consult your distribution's
                documentation for details and suggestions for setting up an NFS
                server.
           



Managing the SSH known hosts file

               For security reasons, a thin client won't connect to an
               untrusted server. You must add the keys of secondary servers
               inside the client root on the primary server. To do this,
               first export the key file of the secondary server using LTSP's
               tools. As root, run:
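
                If your LTSP version does not provide a dedicated export
                tool, the standard OpenSSH `ssh-keyscan` utility can capture
                the secondary server's public host key instead (`server2` is
                a placeholder for your secondary server's name):

                    ssh-keyscan server2 > ssh_known_hosts.server2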
           


                Then, copy the resulting key file
                to the primary server, into the directory that
                `ltsp-update-sshkeys` scans for additional keys,
                and run `ltsp-update-sshkeys` on the primary
                server. Then, thin clients will trust the freshly added
                server, and will be able to connect to it.
           


                If a secondary server changes its IP address, this
                procedure must be repeated.
           

Setting Network Forwarding or Masquerading

               The purpose of IP Masquerading is to allow machines with
               private, non-routable IP addresses on your network to access
               the Internet through the machine doing the masquerading.
               Traffic from your private network destined for the Internet
               must be manipulated for replies to be routable back to the
               machine that made the request. To do this, the kernel must
                modify the source IP address of each
               packet so that replies will be routed back to it, rather than
               to the private IP address that made the request, which is
                impossible over the Internet. Linux uses
                connection tracking
                (conntrack) to keep track of which connections belong to which
               machines and reroute each return packet accordingly. Traffic
               leaving your private network is thus "masqueraded" as having
               originated from your gateway machine. This process is referred
               to in Microsoft documentation as Internet Connection Sharing.
           


               IP Forwarding with IP Tables
           


                        To enable IPv4 packet forwarding, edit
                        /etc/sysctl.conf and uncomment the following
                        line:
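
                        net.ipv4.ip_forward=1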


                        If you wish to enable IPv6 forwarding, also
                        uncomment:
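
                        net.ipv6.conf.default.forwarding=1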
                   


                       Next, execute the sysctl command to enable the new
                       settings in the configuration file:
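
                        sudo sysctl -p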
                   


                       IP Masquerading can now be accomplished with a
                       single iptables rule, which may differ slightly based
                       on your network configuration:
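
                        sudo iptables -t nat -A POSTROUTING -s 192.168.0.0/16 -o eth0 -j MASQUERADE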
                   


                       The above command assumes that your private address
                       space is 192.168.0.0/16 and that your Internet-facing
                       device is eth0. The syntax is broken down as follows:
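
                        -t nat: operate on the NAT table.
                        -A POSTROUTING: append the rule to the POSTROUTING chain.
                        -s 192.168.0.0/16: match packets from the private address space.
                        -o eth0: match packets leaving via the Internet-facing device.
                        -j MASQUERADE: rewrite the source address to that of the outgoing interface.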
                   

Session dispatching


Define the server list

               LDM is a login manager for thin clients. Users can select a
               server from the available ones in the host selection dialogue
               box.
           


                The displayed server list is defined by the
                `LDM_SERVER`
                parameter. This parameter accepts a list of server IP addresses
                or host names, separated by spaces. If you use host names, then
                DNS resolution must work on the thin client. If defined
                in the `lts.conf` file, the list order will
                be static, and the first server in the list will be selected
                by default.
           


                You can also compute a new order for the server list, by
                creating the script
                `/opt/ltsp/<arch>/usr/share/ltsp/get_hosts`.
                The
                `LDM_SERVER` parameter
                overrides the
                script. Consequently, this parameter must not be defined if
                `get_hosts` is going to be used. The
                `get_hosts`
                script writes to its standard output each server IP address or
                host name, in the chosen order.
           

Dispatching method

                You can change this behaviour by using a script to rearrange
                the list. The simplest way to do it is by randomizing the
                list. First, define a custom variable in the file
                `lts.conf`, for example `MY_SERVER_LIST`, that will
                contain the list of servers, in the same format as
                `LDM_SERVER`.
                Then, put the following script in
                `/opt/ltsp/<arch>/usr/share/ltsp/get_hosts`:
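
                A minimal sketch of such a randomizing script (assuming the
                custom `MY_SERVER_LIST` parameter described above):

                    #!/bin/bash
                    # get_hosts: print the servers from MY_SERVER_LIST in random order.
                    for server in $MY_SERVER_LIST; do
                        echo "$RANDOM $server"      # prefix each host with a random sort key
                    done | sort -n | cut -d' ' -f2- # order by the key, then strip it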
           


                More advanced load-balancing algorithms can be written. For
                example, load balancing can be done by querying ldminfod for
                the server rating. By querying ldminfod, you can get the
                current rating state of the server. This rating goes from 0 to
                100; higher is better. Here is an example of such a query:
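
                Using netcat, for instance (ldminfod normally listens on TCP
                port 9571; substitute your server's address):

                    nc localhost 9571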
               
           

Network Swap

               Just like on a full fledged workstation, it helps to have
               swap defined for your thin client. "Swap" is an area of disk
               space set aside to allow you to transfer information out of
               ram, and temporarily store it on a hard drive until it's
               needed again. It makes the workstation look like it has more
               memory than it actually does. For instance, if your
               workstation has 64 Megabytes of ram and you configure 64
               Megabytes of swap, it's theoretically possible to load a 128
               Megabyte program.
           


               We say, "theoretically", because in practice, you want to
               avoid swapping as much as possible. A hard drive is several
               orders of magnitude slower than ram, and, of course, on a thin
               client, you don't even have a hard drive! You have to first
               push the data through the network to the server's hard drive,
               thus making your swapping even slower. In practice, it's best
               to make sure you have enough ram in your thin client to handle
               all your average memory needs.
           


               However, sometimes that's not possible. Sometimes, you're
               re-using old hardware, or you've simply got a program that
               isn't normally used, but does consume a lot of ram on the thin
               client when it does. Fortunately, LTSP supports swapping over
               the network via NBD, or Network Block Devices. We include a
               small shell script called nbdswapd, which is started via
                inetd. It handles creating the swap file, setting up the
                swapping, and removing the swap file when it's no longer
                needed, after the terminal shuts down.
           


                By default, swap files are 64 Megabytes in size. This was
                chosen to give your workstation a little extra ram, but not
                use up too much disk space. If you get some random odd
                behaviour, such as Firefox crashing when viewing web pages
                with a lot of large pictures, you may want to try increasing
                the size of the swap files. You can do so by creating a file
                in the directory `/etc/ltsp` on the LTSP
                server, called `nbdswapd.conf`. In it, you can
                set the SIZE variable to the number of Megabytes you wish
                the files to be sized to. For instance, to create 128 Megabyte
                files, you'll want: SIZE=128 in the `nbdswapd.conf` file.
           


                Please note that this is a global setting for all swap files.
                If your server has 40 thin clients, each using 128 Megs of
                swap, you'll need 128 * 40 = 5120 Megabytes, or a little over 5
                Gigabytes of space in the directory
                where the swap files are stored.
           

Managing DHCP

                DHCP stands for Dynamic Host Configuration Protocol and is the
                very first thing your thin client uses to obtain an IP address
                from the network, in order to allow it to start booting. In
                LTSP, the dhcpd configuration file is located at `/etc/ltsp/dhcpd.conf`.
                Any changes you want to make to booting behaviour should be made there.
           


                By default, LTSP ships a `dhcpd.conf` that
                serves thin clients in a dynamic range (i.e. it will hand out
                IP addresses to anyone who asks for them) from 192.168.0.20 to
                192.168.0.250. The default dhcpd.conf file looks something like:
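
                A sketch of the stock file (addresses, architecture, and
                paths vary by distribution):

                    authoritative;

                    subnet 192.168.0.0 netmask 255.255.255.0 {
                        range 192.168.0.20 192.168.0.250;
                        option domain-name-servers 192.168.0.254;
                        option broadcast-address 192.168.0.255;
                        option routers 192.168.0.254;
                        option subnet-mask 255.255.255.0;
                        option root-path "/opt/ltsp/i386";
                        next-server 192.168.0.254;
                        if substring( option vendor-class-identifier, 0, 9 ) = "PXEClient" {
                            filename "/ltsp/i386/pxelinux.0";
                        } else {
                            filename "/ltsp/i386/nbi.img";
                        }
                    }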
           


                This `dhcpd.conf` should handle most
                situations.
           


                By default, LTSP will detect an unused network interface and
                configure it to be 192.168.0.254. LTSP's recommended single
                server installation is to use a separate network interface for
                the thin clients. If, however, you're not using two network
                interfaces, or you already have an interface in the 192.168.0
                range, then you might have to configure the thin client
                interface differently, which means you may have to adjust
                `/etc/ltsp/dhcpd.conf` accordingly.
           


                If the network interface that you're going to connect the thin
                clients to has, say, a TCP/IP address of 10.0.20.254, you'll
                want to replace every occurrence of 192.168.0 with 10.0.20 in
                the `dhcpd.conf` file.
           


               Always remember, you'll need to re-start the dhcp server if
               you make any changes. You can do this by issuing the command:
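
                For example, on Debian/Ubuntu with the dhcp3-server package
                (adjust the service name for your distribution):

                    sudo invoke-rc.d dhcp3-server restart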
           

(at the command prompt.)


Adding static entries to the dhcpd.conf

            Sometimes, you may need to have a certain terminal boot with a
            guaranteed fixed TCP/IP address every time: say, if you're
            connecting a printer to the terminal, and need to make sure the
            print server can find it at a fixed address. To create a fixed
            address, use a low number in the range of 2-19, or change
            the range statement in the `dhcpd.conf`.
       


           To create a static entry, simply add the following after the
           "option root-path" line:
       
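
            A sketch of such an entry (the MAC address, IP address, and
            hostname are placeholders):

                host hostname {
                    hardware ethernet 00:11:22:33:44:55;
                    fixed-address 192.168.0.2;
                }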


            Substitute the MAC
            address shown for the MAC address of the thin client whose
            address you wish to fix. The fixed-address will be the TCP/IP
            address you want, and "hostname" is the name you wish to give the
            host. This kind of setup is relatively complex, and the admin
            should have a full understanding of how DHCP works before
            attempting it. For more information, check the Internet.
       

DHCP failover load balancing

            Another common method of load balancing is to use DHCP
            failover. There are excellent writeups on the topic available
            online and in the ISC dhcpd documentation.


Lockdown with Sabayon (user profile manager) and Pessulus (lockdown editor)


                   A common requirement in both schools and businesses is
                   having the ability to lock down the desktop and provide
                   certain default configurations.
               


                   In LTSP, the applications you'll want to use are Sabayon
                   and Pessulus. You'll want to add them from the package
                   manager.
               


                   The Sabayon user profile editor looks like a window that
                   contains a smaller sized picture of your desktop. Within
                   this window, you can create a default layout: add icons to
                   panels and the desktop, lock down the panels so they can't
                   be modified, remove access to the command line, etc.
               


                   Once you're done, you can save your profile. You have
                   the option of applying your profile to either individual
                   users, or all users on the system. Please consult the
                   manual included with Sabayon for all the details.
               


                    More information is available here:
                    http://www.gnome.org/projects/sabayon/

Replication of desktop profiles

                If you customize users' desktops, then the custom desktop profiles
                should be copied to every server. Gnome desktop profiles
                created with Sabayon are located in
                `/etc/desktop-profiles`.
           

Managing the thin client

            Previously, there was a program called TCM, or Thin Client
            Manager, which was responsible for checking what was happening on
            the various thin terminals, messaging between them, locking them, or
            generally offering support from a master terminal. This has now
            been replaced by iTALC, which must be separately
            installed depending on your distribution.
       

Lockdown Editor

                By choosing a single user and right-clicking on that user's
                name, you will open up the context menu. From here you can
                choose "Lockdown", which will allow you to set options to
                restrict a particular user. Clicking this menu item will
                invoke the "Pessulus" program, which is the Gnome lockdown
                editor. Ticking and unticking options in Pessulus will enable
                and disable certain functions for that particular user. There
                is a padlock next to each option in Pessulus. Ticking this
                will make the option unchangeable by the user. This is called
                a mandatory setting. For further help with Pessulus, please
                refer to the Pessulus documentation.
           

Updating your LTSP chroot

            At some point in the future, updates will become available for
            your LTSP server. You must remember that although you may have
            applied all the updates to the server itself, it is likely that
            the LTSP chroot will also need updating. To do this you must open
            up a terminal and use the following commands.
       


            First, make sure the client environment has the same package
            lists as the server. To achieve that, copy the
            /etc/apt/sources.list (on Debian and Ubuntu) or the
            /etc/yum.repos.d/fedora.repo file (on Fedora) from the server to
            the client environment.
       


           Now issue the command below.
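
            sudo chroot /opt/ltsp/<arch>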

            (replace <arch> with the architecture you are working with.)
       


            This will change your root directory to be the LTSP client's root
            directory. In essence, anything you now do inside here will be
            applied to the LTSP client's root. This is a separate small set of
            files that are used to boot the clients into a usable state, and
            enable them to contact the LTSP server. Once inside this shell, we
            must type the following command to obtain the latest list of
            packages from the apt/yum servers.
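
            apt-get update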
       


           on Debian and Ubuntu
       


            You need to mount `/proc` in the chroot before beginning, as some of
            the packages you install may need resources in `/proc` to install correctly.
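
            mount -t proc proc /proc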
       


            To be sure no daemons are started, do the following:
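
            export LTSP_HANDLE_DAEMONS=false

            (this variable is honoured by the LTSP chroot on Debian/Ubuntu;
            check your distribution's LTSP documentation if it has no effect)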


           Once this has completed you will have to upgrade the software in
           the chroot by running the following command:
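
            apt-get upgrade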
       

(on Debian and Ubuntu)


           or
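
            yum update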
       

(on Fedora)


            Just in case `/proc` is still mounted when you exit the chroot,
            unmount it first by doing:
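
            umount /proc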


            Once you're done, you must leave the chroot by either typing
            `exit`
            or by using the key combination Ctrl+D. This will return you to
            the root of the server.
       


           If your kernel has been upgraded you must run the LTSP kernel
           upgrade script, to ensure that your LTSP chroot uses the latest
           version. This is performed by running the command below:
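
            sudo ltsp-update-kernels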
       


           All of your clients will now use the latest kernel upon their
           next reboot.
       


           Finally, you must remember to rebuild the NBD boot image from
           your chroot with the following command:
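
            sudo ltsp-update-image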
       

(add the architecture, if needed, with the --arch option)


           Be advised that this may take a few minutes, depending on the
           speed of your server.
       

Appendix I

           Here you can find some solutions to common questions and
           problems.
       

Using NFS instead of NBD

               Using NBD instead of NFS has several advantages:


                        Using a squashfs image merged with a unionfs
                        overlay gives us writable access that is a
                        lot faster during bootup.
                   


                       A squashed root filesystem uses less network
                       bandwidth.
                   


                       Many users and administrators have asked us to
                       eliminate NFS, for reasons of site policy. Since the
                       squashed image is now served out by nbd-server, which
                       is an entirely userspace program, and is started as
                       the user nobody, this should help to eliminate
                       concerns over NFS shares.
                   


               However, some people still want to use NFS. Fortunately,
               it's easy to switch back to NFS, if it's so desired:
           


                       On the server, use the chroot command to maintain
                       the LTSP chroot:
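
                        sudo chroot /opt/ltsp/<arch>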


                        Now edit /etc/default/ltsp-client-setup and change
                        the value of the root_write_method variable to use
                        bind mounts instead of unionfs; it should look like
                        this afterwards:
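
                        root_write_method="bind_mounts"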
                   


                        Next, create the file
                        `/etc/ltsp/update-kernels.conf` and add the following
                        line (set the value of the BOOT variable to nfs):
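
                        BOOT=nfs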
                   


                       Regenerate the initramfs: 
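
                        update-initramfs -u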


                       Hit CTRL-D to exit the chroot now. Make sure LTSP
                       uses the new initramfs to boot:
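
                        sudo ltsp-update-kernels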
                   

Enabling dual monitors

                First, I am going to start with a couple of assumptions:


                       I will assume that you are operating thin clients
                         with an NBD file system in this write-up.
                   


                       I will assume that you are running Ubuntu 8.04.1


                       I will assume that you are running LTSP 5


                       I will assume that you are replacing a running
                       image that has been properly tested, and is working.
                   


                Create a
                new image to ensure your configuration is congruent with my
                successfully tested configuration:
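
                sudo ltsp-build-client --arch i386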
           


                (note that the --arch i386 option is required for my system
                because it's running an amd64 kernel. It may not be required
                for individuals running a 32-bit kernel)
           


                Download the pertinent VIA unichrome driver for your chipset
                from VIA's support web site.
                Be sure to select the proper OS as well. The installation
                script is set up specifically for the directory structure of
                each OS, and will error out if the wrong OS release is
                installed. Next we need to move the downloaded file to the
                image directory:
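
                sudo mv via-driver.tar.gz /opt/ltsp/i386/root/

                (the archive name here is a placeholder for the file you
                actually downloaded)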
           


               After that, we need to chroot to the same image directory.
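
                sudo chroot /opt/ltsp/i386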


                Unpack the driver in the root directory:
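
                cd /root && tar xzf via-driver.tar.gz

                (again, substitute the real archive name)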


               After unpacking, enter the directory:
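
                cd via-driver/

                (the directory name depends on the driver release)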


               Run the file contained inside to start the driver installation
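
                sh ./install.sh

                (a placeholder; run whichever install script ships in the
                archive)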
           

                (The following error: "VIAERROR: The /etc/X11/xorg.conf is
                missing!" can be ignored. We will be replacing the xorg.conf
                anyway, and the drivers are still installed properly.)


                Next we need to put a proper xorg.conf into the proper
                directory:
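
                nano /etc/X11/xorg.conf

                (use whichever editor is available in the chroot)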
           


               Now paste the following in to the empty file:
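
                The pasted file is hardware-specific; the skeleton below only
                illustrates the shape of a dual-head layout (the via driver
                name, BusID, and per-monitor Modes lines must match your own
                hardware):

                    Section "ServerLayout"
                        Identifier "DualHead"
                        Screen 0 "Screen0" 0 0
                        Screen 1 "Screen1" RightOf "Screen0"
                    EndSection

                    Section "Device"
                        Identifier "VIA0"
                        Driver "via"
                        BusID "PCI:1:0:0"
                        Screen 0
                    EndSection

                    Section "Device"
                        Identifier "VIA1"
                        Driver "via"
                        BusID "PCI:1:0:0"
                        Screen 1
                    EndSection

                    Section "Monitor"
                        Identifier "Monitor0"
                    EndSection

                    Section "Monitor"
                        Identifier "Monitor1"
                    EndSection

                    Section "Screen"
                        Identifier "Screen0"
                        Device "VIA0"
                        Monitor "Monitor0"
                        SubSection "Display"
                            Modes "1280x1024"
                        EndSubSection
                    EndSection

                    Section "Screen"
                        Identifier "Screen1"
                        Device "VIA1"
                        Monitor "Monitor1"
                        SubSection "Display"
                            Modes "1024x768"
                        EndSubSection
                    EndSection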
           


                In the section pasted into the xorg.conf above, notice
                that there are resolutions specified per monitor. Please
                ensure that you have the proper resolutions for your
                monitors entered in those areas. Be sure to save the file as
                xorg.conf and exit out of your chroot'd image (Ctrl+D or
                "exit"). Next we need to put in an addendum in the
                lts.conf.
           


                Feel
                free to comment out anything that you need to in the lts.conf.
                The key addition is the X_CONF line, shown in the sketch below:
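
                A minimal sketch (only the X_CONF line is essential; it points
                the client at the hand-made config inside its root):

                    [default]
                        X_CONF = /etc/X11/xorg.conf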
           


                Feel free to copy and paste this in its
                entirety if you want, but you will only need the last line.
                After you add the X_CONF line, save and exit. Now we need to
                make the changes that we have made take effect in the image:
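
                sudo ltsp-update-image --arch i386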
           

(again, the --arch i386

                option will not be required for most, but I am putting it in just in
                case a user has an x64 installation on their server.) That
                should do it! Boot up the client and you should be good to go.