ROC Northern Europe
-------------------
http://indico.cern.ch/materialDisplay.py?subContId=4&contribId=0&materialId=0&confId=40432

TRIUMF
------
http://indico.cern.ch/getFile.py/access?subContId=3&contribId=1&resId=0&materialId=0&confId=40864

ROC Italy
---------
There is a general consensus to support the Dutch Resource Centres' Position Statement document, since many problems can originate from the proposed mechanism. We also point out that in the Italian ROC the official middleware release is the INFN-GRID release. It is based on the standard EGEE release but contains some additional components. Before being deployed, INFN-GRID is tested and certified, and the appropriate yaim scripts are produced. It is therefore very likely that a centralized mechanism to install/update the WNs, without appropriate testing/certification of the Italian additional components against new releases of some gLite components, would break the production environment.

ROC CE
------
Rather negative, as this looks like a centralization that breaks the idea of the Grid, raises problems with the independence of NGIs, etc.

ROC SWE
-------
Here are some comments from the SWE federation on the issue of "centralized distribution of gLite client software":

1) About installation testing: will there be a SAM test to verify the installed WN middleware, analogous to the VO manager SAM tests, or will this task fall to the site admins? Software installed by VO admins sometimes contains errors; will this affect site reliability?

2) The LHC VOs already use NFS for their own software (and some of them already deploy their own gLite clients). Other VOs, however, do not currently depend on the availability of NFS filesystems. A global gLite client installation via NFS (or another shared filesystem) introduces a new point of failure for these VOs and for the sites.

3) The introduction of this shared space also imposes the adoption of shared filesystems on clusters that do not support LHC VOs.

4) Overall this is a new single point of failure that will affect all VOs, hence decreasing site reliability. Is it worth decreasing reliability in favour of faster deployment? As a production service, EGEE should be much more worried about site reliability.

5) Will the scalability and reliability of the new deployment mechanism be evaluated? As the CESGA people pointed out, what is the performance impact? How many nodes can share the same software area? It is quite important to have a feeling for these numbers, especially for T1 and T2 sites.

6) Clients may need specific versions of libraries, and library updates are the site admin's responsibility. How will this be coordinated?

7) In terms of security, the suggested approach will decrease site security. If this method goes ahead, users other than root will be able to install client commands and software used by all grid users of the site, whether they belong to LHC VOs or not. Consequences:
   a) Attacks that replace the client images on the NFS server will be harder to track and much more successful in spreading, since the files are shared across all worker nodes.
   b) Attacking a site becomes easier, since it is no longer only root who can install grid client commands used by all grid users at the site. These are not VO-specific commands; they are general commands used by all users.
   c) It becomes harder for site managers to verify changes in the installed images. With RPMs, for instance, the integrity of the images can easily be verified thanks to the checksums; this will not be possible with tarballs (see the sketch after this list).
   d) The site manager will not be aware of the changes (they are done remotely). This defeats security approaches such as checksumming of the system installation with tools like Tripwire.
   e) Since the images are installed in a shared space, a successful attack that replaces command images on a misconfigured worker node will quickly propagate to all the other worker nodes.
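To make points 7c and 7d concrete, the following is a minimal sketch (in Python) of the kind of manifest-based integrity check a site would have to build and operate itself for a tarball-installed client area. The install path, manifest file name and script name are hypothetical; RPM-based installations get an equivalent check for free from the package database (e.g. via "rpm -V").

#!/usr/bin/env python
# Illustrative sketch only: builds or verifies a SHA-1 manifest for a
# tarball-installed client area (e.g. a shared gLite WN client tree).
# The paths and names used here are hypothetical.

import hashlib
import os
import sys

def file_digest(path):
    """Return the SHA-1 hex digest of a file, read in chunks."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root, manifest):
    """Record a digest for every regular file under 'root'."""
    with open(manifest, "w") as out:
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in sorted(filenames):
                path = os.path.join(dirpath, name)
                if os.path.isfile(path) and not os.path.islink(path):
                    out.write("%s  %s\n" % (file_digest(path), path))

def verify_manifest(manifest):
    """Report files whose current digest differs from the recorded one."""
    changed = 0
    with open(manifest) as f:
        for line in f:
            digest, path = line.rstrip("\n").split("  ", 1)
            if not os.path.isfile(path):
                print("MISSING  %s" % path)
                changed += 1
            elif file_digest(path) != digest:
                print("CHANGED  %s" % path)
                changed += 1
    return changed

if __name__ == "__main__":
    # Usage (hypothetical paths):
    #   python check_clients.py build  /opt/glite-wn-clients clients.sha1
    #   python check_clients.py verify clients.sha1
    mode = sys.argv[1]
    if mode == "build":
        build_manifest(sys.argv[2], sys.argv[3])
    else:
        sys.exit(1 if verify_manifest(sys.argv[2]) else 0)

Tools such as Tripwire perform this kind of checksumming at the whole-system level; as point 7d notes, remote out-of-band updates to a shared area would undermine it, because legitimate and malicious changes become indistinguishable to the site manager.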
8) In the past, considerable effort was committed to developing job managers such as LCGPBS to reduce the dependency on NFS filesystems and increase site reliability. Now it seems we are moving in the opposite direction.

9) Many sites use the Beowulf cluster approach, where each worker node is fully independent from the others (no shared filesystems at all).

10) We would be glad to see the opposite approach, i.e. decrease the sites' dependency on shared filesystems and provide a method that would enable site admins to have the VO software installed locally on each worker node.

11) Many sites have highly customized local installations; this is particularly true today because of interoperability with NGIs and regional infrastructures. The centralized installation will likely break this.

12) Presently, some other grid projects and NGIs rely on the gLite middleware. The centralized installation of client tools could break interoperability with NGIs. With respect to this point, NGIs will likely want strict control over what is installed and when, in order not to break their local releases.

13) From the political point of view, this centralized distribution of the gLite clients on the WNs could send the wrong message to the NGIs. EGEE-III is strongly involved in preparing EGI via NGI interoperability. Establishing a central EGEE body to deploy clients on the WNs goes against this philosophy. EGEE-III should be about decentralization to the NGIs, not the opposite.

14) In the end, how many sites actually want to use this new installation approach, and how much effort will be committed to supporting it? Will it be worth it?

15) From our experience, only the LHC VOs are keen to have the latest release with the latest features of the gLite middleware; the other VOs seem, in general, "happy" with what the production gLite middleware offers. On the other hand, there are some classes of features which were never properly supported but have been requested over and over again by almost all VOs, such as MPI and proper client tool support for other OSes and architectures, to name just a couple. The centralized client installation implies that only sites running WNs on SL(C)4 Intel architectures are targeted. Or is it planned to provide those clients for Debian, RHEL5 (and clones), Itanium, PowerPC, just to name a few? In the end, who will actually profit from and use this "new feature"? We would bet that the LHC VOs will not, since they already ship their middleware clients with their own software.