Chris Cooke cc@inf.ed.ac.uk
The SANs can be accessed by direct fibre connection or over the SRIF high speed network using either block level (iSCSI) or file level (NFS) protocols.
The School's Beowulfs will be connected directly to the School SAN, as will any other research machines that need high-speed local disk storage.
Further details are available on the web.
If you have large-scale data storage requirements, or a requirement for high-speed data communications between School sites, please contact Support.
Alastair Scobie ascobie@inf.ed.ac.uk
This ability to display a view composed of fields from several tables is extremely useful when one bears in mind that over 150 different tables are currently in use in the School Database.
When the Division of Informatics was formed in August 1998 the DAI database became the Informatics database and enabled staff on each of the four sites of the Division to work together on a single database rather than relying on private local PC-based databases. I think that it can be argued that having this central multi-user networked database for the Division and now School of Informatics has been a very strong unifying force. In the summer of 2001 the service was moved to hedwig.dai.ed.ac.uk, a Sun Blade 1000 with 1GB of memory and three external 18GB disks attached to a UPS (uninterruptible power supply).
Further changes to some of the tables, and to many of the database reports, have recently been made or are in progress as a result of the move to semesterisation and the Curriculum Project.
Ken Dawson ktd@inf.ed.ac.uk
The wide variety of ageing Sun hardware in use at the various Informatics sites has been replaced by six SunFire 280R rack-mounted servers: phoenix and roc at Appleton Tower, sphinx and wyvern at Kings Buildings, and pegasus and hippocampus at Buccleuch Place. These servers are connected to the network by a Gigabit Ethernet interface and feature dual processors, dual system disks and dual power supplies connected to a UPS for maximum redundancy. These machines are configured via LCFG.
The storage attached to these machines provides an interesting illustration of the direction storage technology is taking. Each server has two 72G Fibre Channel internal disks. Half of each disk is used for the operating system (one disk holds a constantly updated mirror of the other, so that in the event of a disk failure the server can continue to run using the remaining disk) and the other half is used for user data. Even without any external storage attached, therefore, each of the new servers has 72G of user disk space at its disposal.
Impressive though this is, it isn't enough to satisfy the voracious appetite of Informatics users for disk space and so each of the servers has some form of external storage attached.
Phoenix, roc and pegasus have what might be termed an older style of storage. Each has a SCSI-3 JBOD (a great acronym, standing for 'just a bunch of disks') containing six 36G disks, which adds 216G; together with the 72G of internal user space, each of these servers therefore has 288G available for user files.
Many of you will have heard of RAID in the context of disk storage. RAID (Redundant Array of Inexpensive Disks) potentially offers many advantages in reliability and performance, but until relatively recently the 'I' in RAID was somewhat relative, since the disks used within the array were usually SCSI devices, restricting large-scale use of RAID to high-end (and expensive!) applications. Within the last two years or so a new class of storage device has appeared: the ATA-based storage array, which uses cheap ATA disks, normally found in PCs, rather than SCSI disks to make up the RAID array. ATA disks have traditionally been regarded as slower and less reliable than SCSI disks, but clever circuitry and the use of redundant disks go some way towards overcoming this disadvantage.
We have two such devices hosting home directories within Informatics: a Nexsan ATAboy at Kings Buildings and a Nexsan ATAbeast at Buccleuch Place. Each contains fourteen 250G ATA disks for home directory use, which would seem to give each device a total of 3500G. In fact one disk in each array is reserved as a hot spare, and the equivalent of another is used by RAID for parity checks, so we have 3000G available per array.
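The capacity arithmetic above can be sketched in a few lines of shell, with the figures taken straight from the text:

```shell
# Usable capacity of one of the Nexsan arrays described above:
# 14 x 250G ATA disks, one kept as a hot spare, and the equivalent
# of one more disk consumed by RAID parity checks.
disks=14
disk_size_gb=250
hot_spares=1
parity_equiv=1

raw=$(( disks * disk_size_gb ))
usable=$(( (disks - hot_spares - parity_equiv) * disk_size_gb ))

echo "raw: ${raw}G  usable: ${usable}G"   # prints: raw: 3500G  usable: 3000G
```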
The other point of interest about these arrays is that they are Fibre Channel devices. Traditionally, storage devices have been attached to a single host over a distance of a few metres at most. Fibre Channel allows multiple hosts to access the same storage device over far greater distances, so that a server at, say, Appleton Tower could directly access the Kings Buildings ATAboy. At present we are using this capability to share each array between the two servers located at the same site. There will be more about Fibre Channel in a forthcoming newsletter article about the SAN (Storage Area Network) that Informatics is currently setting up.
This new storage allows us to substantially increase the default disk quotas allocated to the various classes of Informatics users. The new quotas are:
Please do not regard these quotas as targets to be met! It seems to be a universal law of computing that the disk space used will always expand to match the space available. We hope that this will not be the case in Informatics for several years to come and would encourage you to keep deleting unwanted files and archiving material for which immediate access is not required to media such as CD. Support will be happy to help you with any questions you might have about how to go about this.
Craig Strachan cms@inf.ed.ac.uk
We are instead joining the University "central" service, which as well as being much more secure will provide both staff and students with uniform access across many more areas of the University. Already two thirds of our access points operate on the "central" network, and we intend to add more to improve coverage of those Informatics areas where reception is currently poor.
DICE managed laptops which have been upgraded to RedHat 9 have a simple way to connect to the rest of Informatics: once the machine has attached to the "central" network, the command renc should be used to renew your authentication credentials in the usual way; and once that has been done, the command infvpn will bring up an encrypted tunnel, making the machine appear as though connected directly to an Informatics network port.
There is no need to register with EUCS to use this facility.
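Putting the steps above together, a typical session on such a laptop looks like this (renc and infvpn as named above; any prompts and output will vary):

```
renc      # renew your authentication credentials in the usual way
infvpn    # bring up the encrypted tunnel back to Informatics
```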
Users of self-managed laptops currently do have to register with EUCS, and must either use the web-based authentication mechanism or install the VPN software. Users of the central VPN service can access their Informatics files and print to Informatics printers by connecting to our Samba service; documentation explaining this in more detail is available.
As an alternative, we are evaluating the OpenVPN package which we already use on DICE with a view to providing simple instructions for its use on self-managed Linux and Windows machines.
There are special provisions for visitors to use the "central" wireless network. The University is participating in a pilot "location independent networking" project by which visitors from participating institutions can authenticate via their home University. For those not covered by this scheme, the support team can issue temporary logins as necessary.
Please note that if your laptop is set up for any kind of peer-to-peer wireless networking you MUST TURN IT OFF before operating around the University. Failure to do so may break the wireless network for other users, even if you yourself are not using it.
More details, and links to wireless-related pages, are available on the web.
George Ross gdmr@inf.ed.ac.uk
Lightweight DICE Machines
Feedback on the recent DICE strategy review confirmed the existence of a strong demand from many users for greater control over the configurations of their desktop machines, without the difficulties and effort required to maintain a completely "self-managed" system. It had always been our intention to provide this, but we have raised its priority in response to the comments received.
We now have a proposal for a compromise configuration which provides much of the freedom of a self-managed machine with many of the benefits of a "centrally-managed" one. We have a number of prototype machines running, and we believe that most of the technical issues can be overcome without undue effort. However, there are significant issues concerning the level of support that would be available, and the amount of effort that could potentially be diverted into supporting a very small number of users. There are also issues concerning some restrictions that are desirable for security reasons.
We definitely want to ensure that the proposed solution meets people's requirements before committing to the work involved -- this includes agreement on the support and security issues, which are likely to be more important than the technical details. A paper describing the proposal is currently being discussed by the COs and will be made available for comment very soon. If the feedback is generally positive, and the support issues can be resolved, the proposal will be presented to the Computing Committee for approval and should be available in trial form early next year.
Paul Anderson dcspaul@inf.ed.ac.uk
Alastair Scobie ascobie@inf.ed.ac.uk
RedHat 9 Laptops -- A New Model
For some time now we have been unable to buy any new laptops to run DICE, as we could not find a model that would run it satisfactorily. I'm glad to report that we have now found a couple of models that we are happy to support: the IBM ThinkPad T41 and T42. We are hopeful that we can also support the IBM ThinkPad X31 (a lightweight model) shortly.
Mail Team News
The long-promised new mail server has been installed, replacing the old RedHat 7 based machine. The new machine has two 2.8GHz Xeon processors, 2GB of RAM, two redundant power supplies fed through a UPS, and two hardware RAID1 containers, one for the operating system and the other for the mail data. All this should help ensure high availability of the service.
IMP
The new machine also brings a new look to the IMP web interface, which is more configurable and has better search and address book functions.
Mail addressed to oldusername@dai.ed.ac.uk is (normally) redirected to newusername@inf.ed.ac.uk; it may also redirect to an external email address, and otherwise the mail is simply rejected with "user unknown".
The process of identifying all existing @dcs.ed.ac.uk addresses, and where mail to them should be forwarded, has begun. This will take a couple of weeks, while any old DCS .forward or .procmailrc files that are still in effect (most are being ignored) are checked for a suitable forwarding address. Once this has been done, the mail service will be switched to the VMR and the last DCS service can be turned off.
This should not affect any current DICE users. However, any users who have managed to bypass the existing redirection will find that their .forward and .procmailrc files become ineffective after this point.
To be able to carry out their duties, the mail team members may also occasionally see the contents of mail messages and mail folders, particularly if requested by a user to sort a specific mailbox problem (in which case their permission will be sought), or to stop and track down mail loops. In this second case it usually isn't possible to contact the user and so we just have to use our best judgement.
At all times users' privacy is paramount. This is not a responsibility the mail team members take lightly.
Neil Brown neilb@inf.ed.ac.uk
The network team will shortly begin reviewing the requirements for Gigabit connections at Appleton Tower, Buccleuch Place and Forrest Hill.
George Ross gdmr@inf.ed.ac.uk
Since the South Bridge fire, the old DAI web pages have been hosted on
DICE hardware, so this just leaves the legacy Cogsci web services to
be moved. They are currently hosted on an old Solaris server, but a
new DICE RH9 server has been allocated for the task. Roger Burroughes (roger@inf.ed.ac.uk) is currently running a pilot version of these services on the new machine, and those affected should have been contacted.
Remember that the goal is eventually to freeze the content of the legacy www.dcs.ed.ac.uk, www.cogsci.ed.ac.uk and www.dai.ed.ac.uk sites, and that all active web content should be hosted under a .inf.ed.ac.uk URL.
Web Team News
The switch to hosting the legacy DCS web pages, www.dcs.ed.ac.uk, on DICE managed hardware went relatively smoothly. There are still some broken pages, mainly Perl CGI scripts, which just need their path to Perl updated to /usr/bin/perl. Please check your pages and scripts if you still expect them to be accessed by the outside world.
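If you have many scripts to fix, the change can be made mechanically. The sketch below uses /usr/local/bin/perl as a hypothetical old interpreter path, and a throwaway demo directory; check what your own scripts actually use before running anything like this over real files:

```shell
# Demo: rewrite an old Perl shebang to /usr/bin/perl.
# The old path /usr/local/bin/perl is an assumption -- check your own scripts.
mkdir -p /tmp/shebang-demo
printf '#!/usr/local/bin/perl\nprint "hello\\n";\n' > /tmp/shebang-demo/test.cgi

# Find every file under the directory whose first line still uses the old
# path, and rewrite just that shebang line in place:
grep -rl '^#!/usr/local/bin/perl' /tmp/shebang-demo | while read -r f; do
    sed -i '1s|^#!/usr/local/bin/perl|#!/usr/bin/perl|' "$f"
done

head -1 /tmp/shebang-demo/test.cgi   # prints: #!/usr/bin/perl
```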
Web Logs
One common complaint is the lack of access to the web server logs. If we could simply let people see them, we would; it would relieve the web team of the task of acting as intermediary when responding to requests about why someone's CGI script is not working. However, we are working to the JANET guidelines, and are bound by the Data Protection Act and the Regulation of Investigatory Powers Act, which means that we cannot allow unrestrained access to the logs because:
Neil Brown neilb@inf.ed.ac.uk
Users should use the renc command to renew their Kerberos credentials, and not kinit. This is primarily because renc also renews KX509 credentials, which we are increasingly using to provide seamless access to authenticated web services. There will be more on KX509 in the next newsletter.
Tim Colles timc@inf.ed.ac.uk
Do let us know if there are any subjects you would like to see covered in the next issue.
Morna Findlay morna@inf.ed.ac.uk
To contribute hints or tips to the next newsletter, please contact the Documentation Team.
Create a file ~/.mtoolsrc with the following content:

drive d: file="/dev/sda1" exclusive
mtools_skip_check=1

You can then access the stick directly as the d: drive using the mtools commands, e.g. mcopy myfile d:
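A few other mtools commands work the same way once the d: drive is configured (the file names here are only examples):

```
mdir d:               # list the contents of the stick
mcopy d:report.txt .  # copy a file from the stick to the current directory
mdel d:myfile         # delete a file from the stick
```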
Chris Walton cdw@inf.ed.ac.uk
~/.xemacs/init.el file. This affects any language mode based on the XEmacs c-mode (e.g. Java):
Chris Walton cdw@inf.ed.ac.uk
Of course, the real solution is to check the source of the errors, but as things otherwise work well this may be a waste of time.
Yuval Krymolowski ykrymolo@inf.ed.ac.uk
Chris Williams ckiw@inf.ed.ac.uk
David Sterratt david.c.sterratt@ed.ac.uk
Informatics Forum, 10 Crichton Street, Edinburgh, EH8 9AB, Scotland, UK
Tel: +44 131 651 5661, Fax: +44 131 651 1426, E-mail: school-office@inf.ed.ac.uk