We have run the test on a standard Linux workstation with a Gigabit
Ethernet connection to the GSI-ROOT RSE. Software environment:
rucio-clients-1.23.6post1, gfal2-bindings-git20200604, gfal-2.18.0.
Authentication: x509_proxy mode, with initial certificate issued for 96
hours and VOMS extensions renewed every 8 hours using 'voms-proxy-init
--noregen'. Steering: two shell scripts (one for ingestion proper, one
for certificate renewal) run in a terminal multiplexer.
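The certificate-renewal script mentioned above might look roughly like the following sketch. The 8-hour interval and the '--noregen' flag come from the text; the VO name "escape" and the loop structure are assumptions.

```shell
#!/bin/bash
# Hedged sketch of the certificate-renewal steering script.
# Refreshes the VOMS attributes on an existing proxy every 8 hours.
RENEW_INTERVAL_SECONDS=$(( 8 * 60 * 60 ))

renew_proxy() {
    # Refresh VOMS extensions on the existing proxy certificate
    # without regenerating the underlying proxy itself.
    voms-proxy-init --noregen --voms escape
}

# Run in the terminal multiplexer alongside the ingestion script, e.g.:
# while true; do renew_proxy; sleep "$RENEW_INTERVAL_SECONDS"; done
```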
2. Test run
* total run time: approx. 72 hours
* rate limiting: random delay of between 30 and 60 minutes between steps
* for each step, 1 in 5 chance of it being a download; otherwise an upload
* generate a 1GB file and upload it with 'rucio upload --rse GSI-ROOT
--scope fair_ingest'. Then:
* 1 in 3 chance of requesting a replica at DESY-DCACHE, or
  * 1 in 3 chance of requesting a replica at 'QOS=A' (a class which at
  the time of writing consists of three RSEs: EULAKE-2, IN2P3-CC-DCACHE,
  and QOS-A-PIC; the latter in turn has two endpoints, one at PIC and
  one at SURFsara), or
* no further replicas expected
* downloads: fetch one of the previously uploaded files with 'rucio
download' (i.e. without requesting specific source)
* initial intermittent upload problems. Identified as caused by a recent
server-side configuration change which broke the xrootd protocol for
most sites (see RocketChat). Fixed.
* 46 files uploaded successfully
* the last 5 upload attempts failed because GSI-ROOT had run out of
storage space. This was a mistake in the server configuration, since
fixed: xrootd now stores data on the correct file system.
* replication to DESY-DCACHE: requested 13 times. 100% success rate.
* replication to QOS=A: requested 17 times. 6 successful (all of them at
IN2P3-CC-DCACHE), 1 labelled as REPLICATING for around 2 days now, 10 STUCK.
* no problems observed with downloads, but unfortunately I neglected to
log which RSE each file was fetched from; that information might have been useful.
* overall not bad, but we have to investigate in the FTS Monitor why
replication to QOS=A fails so often
* should consider the use of MyProxy or something similar for
longer-running and/or batch jobs in the future.
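The stepping logic described in the bullets above can be sketched roughly as follows. The probabilities, delay range, scope, and RSE names come from the text; the function names, the dd-based file generation, and the pick_random_uploaded_did helper are illustrative assumptions.

```shell
#!/bin/bash
# Hypothetical sketch of the ingestion steering loop.

choose_action() {   # $1: a random integer; 1-in-5 chance of "download"
    if [ $(( $1 % 5 )) -eq 0 ]; then echo download; else echo upload; fi
}

choose_replication() {   # $1: random integer; 1/3 DESY, 1/3 QOS=A, 1/3 none
    case $(( $1 % 3 )) in
        0) echo "DESY-DCACHE" ;;
        1) echo "QOS=A" ;;
        *) echo "none" ;;
    esac
}

run_step() {
    if [ "$(choose_action $RANDOM)" = download ]; then
        # Fetch a previously uploaded DID without pinning a source RSE.
        # pick_random_uploaded_did is a hypothetical helper, not shown.
        rucio download "fair_ingest:$(pick_random_uploaded_did)"
    else
        dd if=/dev/urandom of=testfile.dat bs=1M count=1024   # 1 GB payload
        rucio upload --rse GSI-ROOT --scope fair_ingest testfile.dat
        target=$(choose_replication $RANDOM)
        if [ "$target" != none ]; then
            rucio add-rule "fair_ingest:testfile.dat" 1 "$target"
        fi
    fi
    sleep $(( 1800 + RANDOM % 1800 ))   # 30-60 minute delay between steps
}
```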
If you do not have a Rucio client environment available, I recommend running one through Docker. For example:
docker run -e RUCIO_ACCOUNT=bruzzese -v ./usercert.pem:/opt/rucio/etc/usercert.pem -v ./userkey.pem:/opt/rucio/etc/userkey.pem -it -d --name=rucio-client projectescape/rucio-client
In principle, Rucio accepts several types of authentication, among them username/password and IAM proxy.
The authentication method can be changed in the file "rucio.cfg", normally located at "/opt/rucio/etc/rucio.cfg".
You will see that this file contains multiple options.
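As an illustration, a minimal [client] section might look like the sketch below. The hostnames and credential values are placeholders, not taken from the original; only the account name and proxy path appear in the text.

```ini
[client]
rucio_host = https://escape-rucio.example.org:443
auth_host = https://escape-rucio.example.org:443
account = bruzzese
auth_type = userpass
username = myuser
password = mypassword
# For certificate-based methods instead:
# auth_type = x509_proxy
# client_x509_proxy = /tmp/x509up_u1000
```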
First, set "account" to the user name registered in the ESCAPE project (in my case "bruzzese").
Then, among these options it is important to look at "auth_type", which establishes how to authenticate with the ESCAPE Rucio server. In my case I chose the "userpass" option, which requires the "username" and "password" options to be set.
Even so, as mentioned before, other methods may suit you better: "x509" or "x509_proxy", which require a client certificate (client_cert) and a private key (client_key); or, if you have already generated a proxy with your certificates (voms-proxy-init -cert -key -voms escape), the path to your proxy (in my case client_x509_proxy = /tmp/x509up_u1000).
Once you have it set up, try the following:
[user@rucio-client ~]$ rucio whoami
If everything went well, it should return something like this:
created_at: 2020-02-17T14:23:59
updated_at: 2020-02-17T14:23:59