- From: "White, Greg" <greg AT slac.stanford.edu>
- To: Zimoch Elke <elke.zimoch AT psi.ch>, Leonardo Sala <leonardo.sala AT psi.ch>
- Cc: "controls-fel AT lists.psi.ch" <controls-fel AT lists.psi.ch>
- Subject: [Controls-fel] Minutes from 16.04.2013 meeting on release escalation.
- Date: Tue, 16 Apr 2013 06:19:53 -0700
Hi Elke, Leo,
Please find the minutes of today's meeting below. I added the two makefile
use-case examples to help clarify the requirement for the "rsync"-like tool.
What do you think of the idea of just letting the mailing list archive do the
archiving of minutes? I think I'll stop putting a specific file of minutes in
Alfresco and just let the list archive do it. That way, comments get
associated with the minutes better.
Cheers
Greg
----------------------------------------------------------------------------
SwissFEL Controls Software Architecture Meeting.
Present: EZ, LS, GW
Scribe: GW
**********
TOPIC: Terms of reference for team tackling release escalation support.
**********
EZ: There are presently three levels of sw, and they're handled differently
for release:
- Driver - driver.makefile
- IOC - swit
- Host - "Make app". This we don't know how to do.
EZ: driver.makefile is good; it doesn't need much, if any, modification.
EZ: Also, swit works for IOCs, but we don't know whether to use it for app
level. At the moment we use swit for app level too, but it doesn't work well.
E.g. it copies everything to /fin/devl/ and /fin/work/ but doesn't check what
was already there. So we have issues in WHLA.
LS: So we need to build the app release system?
EZ: yes.
LS: We need different versions at different levels of release of the same
system for apps.
EZ/GW: yes
GW: So the essential difference between IOC and host level w.r.t. sw is that
at IOC level only a single level of release can be running on a machine (IOC)
at one time, whereas at host level a number of versions must be available on
one host.
EZ: Yes, and that is true of driver level too.
EZ: One thing driver.makefile is doing right is that it works, and so people
do use it. As soon as the tool is the easiest way to do work, people will use
it.
+2.
GW: So we want to direct the infrastructure team to make a release system on
the host level, one that handles different versions of the same software on
the same host?
EZ: Yes.
GW: And it might be based on puppet or swit.
EZ: Well, puppet is mainly for libraries [of the system, and therefore
usually in dirs like /lib, that is not a mounted disk].
EZ: Also, it must be something I can use at the command line, specifically
not configured on the web.
+2.
LS: You need something you can script.
+2
GW: Requirements:
- scriptable configuration by a programmer.
- a documented procedure for how a release is going to happen.
- a release must plainly not have the potential to mess up the accelerator.
- must include caQtDM code (the core of caQtDM) and other apps.
EZ: I would like the executable and libs of app level to be "installed
locally".
GW: What do you mean by "installed locally"? Do you mean a local disk of
every host?
EZ: Yes, so that when I disconnect the host from the network, the machine
still works.
LS: So, the reference areas [work, devl, prod] are remotely mounted?
GW: Do you mean AFS?
EZ: No, I think NFS.
+2
GW: At SLAC we do release to a mounted disk, and we have not had serious
issues with that. Still, I would agree with you that releasing to the "well
known OS directories" would be better, at least for the final production-level
release.
GW: We could distinguish between 3rd party sw and sw PSI develops: 3rd party
could go into the well known places of the OS, and PSI sw into a mounted disk?
EZ: I would rather we release all sw to one kind of place [the well known OS
places, eg /bin, /lib].
GW: Ok, but it surely will not be the case that 3rd party sw will also be
required to go through the release staging system we build.
[We whiteboarded how and where staged release of application level sw may be
done at PSI]
Requirements list
-----------------
1. The command to release must be a command-line execution, NOT web based.
If config is needed, then by file, not by web app.
2. Must be able to develop on one's own machine (Linux PC or even Windows),
or laptop.
3. First level of release will continue to be to /fin/devl/. Call this "beta"
release.
4. Then release to production hosts from what is in /fin/devl/.
5. We want the same directory structure under /fin/devl/ for each kind of sw
(bin/, lib/ etc/) as on the target production hosts.
6. Release escalation should include some form of logging: it would record
what is being release-escalated, and the developer enters why they are doing
a release escalation.
7. It must be that a developer CAN, if necessary, develop and run sw on the
production network, and point production systems to use it, without going
through the CMS (i.e. CVS) and release-escalating. However, the system should
be so designed as to make such fixes definitely temporary.
8. So we think the basic tool will be make, with targets to release to
/fin/devl/ ("beta") and to production:
- one target of the makefile is "installbeta" - it copies (local rsync) to
/fin/devl/
- one target of the makefile is "installprod" - it remotely rsyncs to
/usr/local/..
9. We need to define clearly what the uses of the existing directories
/fin/work/ and /fin/prod/ will be for SwissFEL, given that we now want to
do remote distribution (to the local disk, i.e. /usr/local/bin).
Use Case Examples:
-----------------
1. "root" OS stuff, like a server startup file.
The following example makefile releases the server startup script
"rdbservice" to /fin/devl/etc/init.d in the first instance (beta) and then to
/etc/init.d/ (prod) on the host that is going to host that service (gfa-e4).
Such a makefile may be written as:
#!/usr/bin/gmake
#!-*- make -*-
#
# Abs: Releases server startup script of EPICS V4 rdb service.
#
# ============================================================================

# Beta release: stage the startup script into the /fin/devl/ reference area.
betainstall :
	rsync -Cav --exclude '*~' rdbservice \
	      /fin/devl/etc/init.d

# Production release. Execute make as root (sudo make install).
install :
	rsync -Cav --exclude '*~' rdbservice \
	      gfalc6064d:/etc/init.d
	rsync -Cav --exclude '*~' rdbservice \
	      gfa-e4:/etc/init.d
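As a usage sketch only (assuming the makefile above, and ssh/root access to
the target hosts), the two stages would be run as:

	make betainstall     # stage to /fin/devl/etc/init.d (beta)
	sudo make install    # push to /etc/init.d on the service hosts (prod)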
2. Take a makefile which must build and release some Java classes,
executables, and scripts:
#!/usr/bin/gmake
#!-*- make -*-
#
# Abs: Compiles and builds psiRdbService and can be used to install
#      build products and executables.
#
# ============================================================================

JAVAC = javac

# Compile each Java source in src/ to a class file under classes/.
classes/%.class : src/%.java
	$(JAVAC) -sourcepath src -classpath $(E4_CLASSPATH) -d classes $<

sources    = $(wildcard src/ch/psi/rdbService/*.java)
classfiles = $(patsubst src/%.java, classes/%.class, $(sources))

all : $(classfiles)

clean :
	find classes -name "*.class" -exec rm {} \;

# Beta release: stage build products into the /fin/devl/ reference area.
betainstall :
	rsync -Cav --exclude '*~' classes bin script \
	      /fin/devl

# Production release: push build products to each production host.
install :
	rsync -Cav --exclude '*~' classes bin script \
	      host1:/usr/local
	rsync -Cav --exclude '*~' classes bin script \
	      host2:/usr/local
	rsync -Cav --exclude '*~' classes bin script \
	      host3:/usr/local
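Again as a usage sketch (assuming E4_CLASSPATH is set in the environment and
host1..host3 are the intended targets):

	make                 # compile all Java sources into classes/
	make betainstall     # stage classes, bin and script to /fin/devl/
	make install         # rsync them to /usr/local on each production host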
So, the primary thing we need is a command which acts like rsync (and may in
fact be rsync) that can:
1. write to the root OS directories (/etc/sysconfig/ and /etc/init.d for
instance), to install system configuration and startup scripts etc.
2. write to /usr/local/ subdirectories (/usr/local/bin/, /usr/local/lib/ etc.)
to release PSI developer-level sw (displays, matlab etc.), for each host on
which such sw must run.
A minimal sketch of such an invocation follows.
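This is illustration only, assuming ssh access to the targets; the host names
and the use of sudo on the remote side are assumptions, not agreed
configuration:

	# Hypothetical: push the staged beta tree to each production host.
	for h in host1 host2; do
	    # 2. PSI developer-level sw into /usr/local/ subdirectories
	    rsync -Cav --exclude '*~' /fin/devl/bin /fin/devl/lib ${h}:/usr/local/
	    # 1. root OS directories need root on the target host
	    rsync -Cav --rsync-path='sudo rsync' /fin/devl/etc/init.d/ ${h}:/etc/init.d/
	done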
Related Questions
-----------------
1. Presumably unix (bash) executable scripts will go in bin/ places. So where
should unix bash "sourced" scripts go? Also in bin/, or somewhere especially
for sourced scripts? Note that bash will look for sourced scripts in the PATH
(see the sketch below).
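For example (a sketch; the script name setup-epics.sh is hypothetical), if a
sourced script is released to a directory on PATH such as /usr/local/bin/:

	# bash searches PATH for a sourced file given without a slash,
	# so this finds /usr/local/bin/setup-epics.sh
	source setup-epics.sh

Whether sourced scripts should share bin/ with executables, or get a place of
their own, is the open question above.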