Le blog de pingou


Tag - Documentation


Thursday, February 17 2022

Using quay.io to host multi-architecture containers

Recently I have worked on a container that I wanted to be able to use on both x86_64 and aarch64 (in other words, on my regular laptop as well as on my raspberry pi). I could build the container on the laptop and push it to quay.io, but the container then failed to start on the raspberry pi, and, obviously, the other way around as well.

The question was then: how to make it so that I could easily pull the container image from quay.io for both architectures?

The answer is basically:

  • Build the container for x86_64 and push it to quay.io with a dedicated tag
  • Build the container for aarch64 and push it to quay.io with another dedicated tag
  • Build a container manifest that points to the two tags created above
  • Push that container manifest to quay.io under the latest tag



Here are the corresponding commands:

Build the container (the same command can be used on both machines):
podman build -t <tag> -f <Dockerfile/Containerfile>

for example:

podman build -t remote_builder -f Containerfile
Log into quay.io:
podman login quay.io
Push the image built to the registry:
podman push <image_id> <registry>/<user>/<project>:<tag>

for example:

podman push 56aea0cde6d2 quay.io/pchibon/remote_builder:aarch64

Once these commands have been run on both architectures, you can check the project on quay.io: you should see the two tags there.

Create the manifest linking the two tags to the `latest` one:
podman manifest create <registry>/<user>/<project>:latest \
   <registry>/<user>/<project>:<tag-1> \
   <registry>/<user>/<project>:<tag-2>

for example:

podman manifest create quay.io/pchibon/remote_builder:latest \
    quay.io/pchibon/remote_builder:aarch64 \
    quay.io/pchibon/remote_builder:x86_64
Finally, push the manifest to quay:
podman manifest push <user>/<project>:<tag> <registry>/<user>/<project>:<tag>

for example:

podman manifest push pchibon/remote_builder:latest quay.io/pchibon/remote_builder:latest


Note: for some registries (such as the ones hosted on gitlab.com), you may need to specify the manifest format to use. You can do so with the --format option:

podman manifest push --format v2s2 <user>/<project>:<tag> <registry>/<user>/<project>:<tag>
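
To check what ended up in the registry, podman can also display the content of a manifest list; for example, re-using the image from above:

podman manifest inspect quay.io/pchibon/remote_builder:latest

This lists the manifest entries together with the architecture each one targets.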



One question I have not yet answered, and will have to check: does the manifest need to be re-generated/updated every time the tags are updated, or will the latest tag always point to the latest image of each tag?

Tuesday, June 8 2021

Screencast and editing

Recently I have had to prepare a couple of demos about some work I have been doing. As my internet connection isn't the fastest, I chose to record a screencast that I could then upload somewhere and share. This avoided issues with my connection during the demo and also gave me the possibility to show the full thing by editing the recording.

However, I ran into a few problems.

I first tried quite a few screencast apps:

  • The screencast tool built into gnome (simply pressing ctrl+alt+shift+r). However, it only records very short screencasts by default, and changing this default means editing a configuration value in dconf (and thus having some idea beforehand of how long the recording will be) — see the example after this list.
  • recapp, that one simply didn't start for me
  • peek, that one seemed to work but intercepted all my mouse clicks, so I could only navigate with the keyboard and could not highlight anything with my cursor; and when I looked at the recording, it was all black
  • SimpleScreenRecorder, could not seem to be stopped once started
  • OBS studio, recorded a black screen
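
For the record, the length limit of gnome's built-in recorder lives in dconf; here is a hedged example (the exact schema and key may differ between gnome versions) raising the limit to 10 minutes — 0 reportedly removes the limit entirely:

gsettings set org.gnome.settings-daemon.plugins.media-keys max-screencast-length 600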

After all this, I gave up and logged back into my session using X11 instead of Wayland. Suddenly all of the screencast apps worked fine... :)

So once I was able to record what I wanted to show, I still had over 10 minutes of video for a demo review, so I wanted to edit it: cut the parts where there is no progress, and increase the speed of the parts where things are happening but do not need to be shown in real time (for example, when a system boots or is being installed, playing it at x2 speed is fine).

I've looked around at different tools and found:

  • kdenlive
  • VidCutter
  • Video Trimmer
  • ShotCut

I ended up settling for kdenlive for two reasons:

  • someone I know uses it and recommended it to me (thus I knew it was able to do what I was looking for)
  • I found this tutorial on youtube explaining exactly how to do what I wanted to do:

[Embedded video: kdenlive tutorial]

The kdenlive UI has changed a little since this video was recorded (for example, the "Change speed" button is now available via a right click on the video track), but this tutorial is enough to give you some basics of video editing with kdenlive.

Wednesday, May 26 2021

Clearing up repositories meta-data when using Image Builder

If you use Image Builder to create images (say OSTree images) and you've added to the sources an RPM repository that changes its metadata frequently (say a copr repo), you may run into this situation:

  • A build failed because of a missing dependency
  • You fix the situation by fixing your (frequently changing) repo
  • You start a new run of Image Builder
  • The new run fails with the same error

This is because the cached RPM metadata are still considered recent enough, and thus are not refreshed by Image Builder.

If you are debugging things, this can quickly become annoying. The way I have found to clear all the repo metadata is simply to do:

sudo rm -rf /var/cache/osbuild-composer/rpmmd/*

just before you kick off a new Image Builder run. This ensures fresh metadata are downloaded (and thus updated).

Creating virtual machines in your home folder

I use virt-manager to create and manage the virtual machines running on my laptop. However, by default, virt-manager wants to store the disk image of any new virtual machine it creates under /root/.local/share/libvirt/images/... (on Fedora 34; I remember it was under /var/lib/ on some earlier releases).

The issue is that my / is on a different partition than my /home, and the latter is much larger (400G) than / (60G). So if I place the disk images in either of the two locations above, I will quickly fill up my /, which causes all kinds of problems :-)

Since last week, I've created a bunch of virtual machines so I needed to figure out this storage location question.

I ended up finding out that I can simply create an empty qcow2 image that I can then use with virt-manager.

To create the qcow2 image simply run:

qemu-img create -f qcow2 <name>.qcow2 <size>

For example:

qemu-img create -f qcow2 os_tree_gnome.qcow2 20G

This creates a 20G disk image named os_tree_gnome.qcow2, which can then be used by virt-manager.

To do this, when creating a new virtual machine with virt-manager, select import existing disk image, then browse to the qcow2 image you just created. Select the name of the OS, specify the number of CPUs and the memory the VM will have, and on the last screen click on Customize configuration before install. This lets you review the settings of the virtual machine before it is installed. There, click on Add Hardware and add a device of type Storage. Set its Device type to CDROM, which allows you, via the Manage button, to pick the iso of your choice (boot.iso, dvd.iso...). Once the CDROM device is added, go to the Boot Options in the settings of the virtual machine and ensure that the CDROM is checked.

At this point you can click on Begin Installation and the VM will boot from the qcow2 file, which is empty, so it will fall back to booting from the CDROM device containing your ISO.
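
For reference, the same virtual machine can also be created from the command line with virt-install; a minimal sketch re-using the disk image from above together with a boot.iso (the name, memory, CPUs and os-variant are example values to adjust):

virt-install --name os_tree_gnome \
    --memory 2048 --vcpus 2 \
    --disk path=os_tree_gnome.qcow2,format=qcow2 \
    --cdrom boot.iso \
    --os-variant fedora34

Since virt-install boots the CDROM first, the installer starts just like in the virt-manager flow described above.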

Monday, September 23 2019

Retrieving the monitoring statuses of someone's packages

Recently we announced on devel-announce the upcoming changes to integrate anitya with dist-git.

Following this announcement, Till H. asked if there was a way to have an overview of the monitoring status of all the packages they maintain. I replied that there is no such thing today, but that I could cook up a little script to help with this.

So here it is: get_monitoring_status.py

Since the feature is only enabled in staging, you will need to point the script at the staging dist-git server for its output to be meaningful.

Here is a small example of output:

$ python3 get_monitoring_status.py pingou --url https://src.stg.fedoraproject.org/
rpms/ampy                                                              : monitoring-with-scratch
    https://src.stg.fedoraproject.org/rpms/ampy
rpms/audiofile                                                         : no-monitoring
    https://src.stg.fedoraproject.org/rpms/audiofile
rpms/boom                                                              : no-monitoring
    https://src.stg.fedoraproject.org/rpms/boom
rpms/bugwarrior                                                        : monitoring-with-scratch
    https://src.stg.fedoraproject.org/rpms/bugwarrior
rpms/datagrepper                                                       : no-monitoring
    https://src.stg.fedoraproject.org/rpms/datagrepper
rpms/datanommer                                                        : no-monitoring
    https://src.stg.fedoraproject.org/rpms/datanommer
...

Tuesday, January 5 2016

Setting up pagure on a banana pi

This is a small blog post about setting up pagure on a banana pi.

Continue reading...

Friday, December 11 2015

Testing distgit in staging with fedpkgstg

Every once in a while we make changes to dist-git in the Fedora infrastructure. This means we need to test our changes to make sure they do not break anything (ideally, at all).

These days, we are working on adding namespacing to our git repos so that we can support delivering something other than rpms (the first use-case being docker). With the current set-up we have, we added namespacing to pkgdb, which remains our main endpoint to manage who has access to which git repo (pkgdb being, in a way, a glorified interface to manage our gitolite). The next step is to teach gitolite about this namespacing.

The idea is to move from:

 /srv/git/repositories/<pkg1>.git
 /srv/git/repositories/<pkg2>.git
 /srv/git/repositories/<pkg3>.git
 /srv/git/repositories/<pkg4>.git

To something like:

 /srv/git/repositories/rpms/<pkg1>.git
 /srv/git/repositories/rpms/<pkg2>.git
 /srv/git/repositories/rpms/<pkg3>.git
 /srv/git/repositories/rpms/<pkg4>.git
 /srv/git/repositories/docker/<pkg2>.git
 /srv/git/repositories/docker/<pkg5>.git

But, in order to keep things working with the current clones out there, we'll symlink the rpms namespace one level higher in the hierarchy, which should basically keep things running as they are currently.

So the question at hand is: now that we have adjusted our staging pkgdb and dist-git, how do we test that fedpkg still works?

This is a recipe from bochecha to make it easy to test fedpkg in staging while not breaking it for regular use.

It goes in three steps:

1. Edit the file /etc/rpkg/fedpkg.conf and add to it:

[fedpkgstg]
lookaside = http://pkgs.stg.fedoraproject.org/repo/pkgs
lookasidehash = md5
lookaside_cgi = https://pkgs.stg.fedoraproject.org/repo/pkgs/upload.cgi
gitbaseurl = ssh://%(user)s@pkgs.stg.fedoraproject.org/%(module)s
anongiturl = git://pkgs.stg.fedoraproject.org/%(module)s
tracbaseurl = https://%(user)s:%(password)s@fedorahosted.org/rel-eng/login/xmlrpc
branchre = f\d$|f\d\d$|el\d$|olpc\d$|master$
kojiconfig = /etc/koji.conf
build_client = koji

2. Create a fedpkgstg command (the name of the cli must be the same as the title of the section added to the config file above):

sudo ln -s /usr/bin/fedpkg /usr/bin/fedpkgstg

3. Call fedpkgstg to test staging, and fedpkg for your regular operations against the production instances.



Thanks bochecha!

Thursday, July 23 2015

Introducing flask-multistatic

flask is a micro-web-framework in python. I have been using it for different projects for a couple of years now and I am quite happy with it.

I have been using it for some of the applications run by the Fedora Infrastructure. Some of these applications could be re-used outside Fedora and this is of course something I would like to encourage.

One of the problems currently is that all those apps are branded for Fedora, so re-using them elsewhere can become complicated. This can be solved by theming. Theming means adjusting two components: templates and static files (images, css...).

Adjusting templates

jinja2, the template engine in flask, already supports loading templates from two different directories. This allows asking the application to load your own templates first and, if it does not find them, to look for them in the directory of the default theme.

Code-wise it could look like this:

    # Build the list of template loaders:
    # the application's own loader first, then the configured
    # theme folder, then the `default` theme as a fallback
    import os

    import jinja2

    templ_loaders = []
    templ_loaders.append(APP.jinja_loader)
    # First load the templates from the THEME_FOLDER defined in the configuration
    templ_loaders.append(jinja2.FileSystemLoader(os.path.join(
        APP.root_path, APP.template_folder, APP.config['THEME_FOLDER'])))
    # Then load the other templates from the `default` theme folder
    templ_loaders.append(jinja2.FileSystemLoader(os.path.join(
        APP.root_path, APP.template_folder, 'default')))
    APP.jinja_loader = jinja2.ChoiceLoader(templ_loaders)
Adjusting static files

This is a little more tricky, as static files are not templates: there is no logic in flask to override one file or another depending on where it is located.

To solve this challenge, I wrote a small flask extension: flask-multistatic, which basically allows flask to treat static files the same way it treats templates.

Getting it to work is easy: at the top of your flask application, do the imports:

    import flask
    from flask_multistatic import MultiStaticFlask

And make your flask application multistatic:

    APP = flask.Flask(__name__)
    APP = MultiStaticFlask(APP)

You can then specify multiple folders where static files are located, for example:

    APP.static_folder = [
        os.path.join(APP.root_path, 'static', APP.config['THEME_FOLDER']),
        os.path.join(APP.root_path, 'static', 'default')
    ]

Note: the order of the folders matters; the last one should be the folder with all the usual files (ie: the default theme), the other ones being the folders for your specific theme(s).


Patrick Uiterwijk pointed out to me that this method, although working, is not ideal for production, as it means that all the static files are served by the application instead of being served by the web server. He therefore contributed an example apache configuration achieving the same behavior (overriding static files), but this time directly in apache!



So using flask-multistatic I will finally be able to make my apps entirely theme-able, allowing other projects to re-use them under their own brand.

Thursday, June 25 2015

EventSource/Server-Sent events: lesson learned

Recently I have been looking into Server-sent events, also known as SSE or eventsource.

The idea of server-sent events is to push notifications to the browser; in a way it could be seen as a read-only web-socket (from the browser's point of view).

Implementing SSE is fairly easy code-wise; this article from html5rocks pretty much covers all the basics, but the principle is:

  • Add a little javascript to make your page connect to a specific URL on your server
  • Add a little more javascript to your page to react to the messages sent by the server



Server-side, things are also fairly easy but need a little consideration:

  • You basically need to create a streaming server, broadcasting messages as they occur or whenever you want (see the sketch after this list).
  • The format is fairly simple: data: <your data> \n\n
  • You cannot run this server behind apache. The reason is simple: the browser keeps the connection open, which means apache will keep the worker process running. So after opening a few pages, apache will reach its maximum number of running worker processes and end up waiting forever for an available one (ie: your apache server is not responding anymore).
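
To make this concrete, here is a minimal sketch of a standalone SSE server, written with python3's asyncio (the post used trollius, its python2 backport); the host, port and messages are made up for the example:

    import asyncio

    async def handle(reader, writer):
        # Read the request line; headers are ignored in this sketch.
        await reader.readline()
        writer.write(
            b"HTTP/1.1 200 OK\r\n"
            b"Content-Type: text/event-stream\r\n"
            b"Cache-Control: no-cache\r\n"
            b"Access-Control-Allow-Origin: *\r\n"
            b"\r\n")
        for idx in range(5):
            # The SSE wire format: `data: <payload>` followed by a blank line.
            writer.write(("data: message %s\n\n" % idx).encode("utf-8"))
            await writer.drain()
            await asyncio.sleep(1)
        writer.close()

    async def main():
        server = await asyncio.start_server(handle, "127.0.0.1", 8080)
        async with server:
            await server.serve_forever()

    asyncio.run(main())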

So after running into the third point listed above, I moved the SSE server out of my flask application and into its own application, based on trollius (which is a backport of asyncio to python2), but any other async library would do (such as twisted or gevent).

After splitting the code out and testing it some more, I found that there is a limit on the number of permanent connections a browser can make to the same domain. I found a couple of pages mentioning this issue, but the most useful resource for me was this old blog post from 2008: Roundup on Parallel Connections, which also explains how to get around this limitation: the limit is per domain, so if you set up a bunch of CNAME sub-domains pointing to the main domain, it will work for as many connections as you like :-) (note: this is also what github and facebook do to support web-sockets in as many tabs as you want).

The final step in this work is to not forget to set the HTTP Cross-Origin access control (CORS) headers in the response sent by your SSE server, to control cross-site HTTP requests (which are a known security risk).



So in the end, I went for the following architecture:

[Diagram: SSE architecture layout — SSE_layout3.png]

Two users are viewing the same page. One of them edits it (ie: sends a POST request to the flask application); the web application (here flask) processes the request as usual (changes something, updates the database...) and also queues a message in Redis with information about the changes (and, depending on what you want to do, specifying what has changed).

The SSE server is listening to redis, picks up the message and sends it to the browsers of the two users. The javascript in the displayed page picks up the message, processes it and updates the page with the change.

This way, the first user updated the page and the second user saw the changes automatically, without having to reload the page.
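
A minimal sketch of that redis hand-off, using the redis python library (the channel name and payload are made up for the example):

    import json

    import redis

    conn = redis.StrictRedis()

    # Web-application side: after processing the POST request,
    # queue a message describing what changed.
    conn.publish('updates', json.dumps({'page': '/page/42', 'action': 'edited'}))

    # SSE-server side: listen to the channel and forward each message to the
    # connected browsers, using the `data: ...` format described earlier.
    pubsub = conn.pubsub()
    pubsub.subscribe('updates')
    for message in pubsub.listen():
        if message['type'] == 'message':
            print('data: %s\n' % message['data'].decode('utf-8'))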



Note: asyncio has a redis connector via asyncio-redis and trollius via trollius-redis.

Wednesday, June 17 2015

Contribute to pkgdb2

How to get started with contributing to pkgdb2.

Continue reading...

Thursday, May 7 2015

Check packages in anitya and pkgdb2 for monitoring

A little while ago I presented a script to search for the packages of a specified user and see which ones are missing from anitya or are not being monitored in pkgdb2.

This script, however, only checks someone's packages, and sometimes we want to check a number of packages at once, possibly all the packages matching a given pattern.

This new script does just that:

 $ python pkgs_not_in_anitya_2.py 'drupal-*'
   drupal-service_links                 Monitor=False   Anitya=False
   drupal-calendar                      Monitor=False   Anitya=False
   drupal-cck                           Monitor=False   Anitya=False
   drupal-date                          Monitor=False   Anitya=False
   drupal-workspace                     Monitor=False   Anitya=False
   drupal-views                         Monitor=False   Anitya=False

If you are interested, feel free to use the script

Friday, April 3 2015

OpenSearch integration in pkgdb

One of the earliest feature requests for pkgdb2 (one that was present in pkgdb1) is browser search integration.

This integration is based on the OpenSearch specification and basically allows you to use pkgdb as one of the search engines of your web browser, just like you can use google, duckduckgo or wikipedia.

I recently found out this feature is not so well known, so I thought I would present it and explain how to set it up (screenshots are from Firefox).

1/ Go to https://admin.fedoraproject.org/pkgdb and click on the list of search engines at the top right.

2/ Select the entry Add "Fedora PkgDB2: Packages"

That's it, you are done with the most important step :)

[Screenshot: pkgdb_search_3.1.png]

Now, something I do and find most useful:

3/ Go to Manage Search Engines...

There, associate the keyword pkgdb with the pkgdb packages search engine.

[Screenshot: pkgdb_search_5.png]

Now, you can use your url bar as usual, but when you enter pkgdb <something>, it will search for this <something> directly in pkgdb. So for example, if you want to search for guake in pkgdb, you would type pkgdb guake in your url bar.

[Screenshot: pkgdb_search_6.png]

The bonus point is that since there is only one package with this name, you will be immediately redirected to its page.

This way, when you want to quickly find information about a package in pkgdb, you can get it from your browser in one simple step (possibly two if several packages match the keyword you entered).

Final bonus point? To access pkgdb directly, enter "pkgdb " in the url bar (with a space at the end and no keyword) and Firefox will bring you straight to the front page of the application.

Tuesday, March 24 2015

New package & new branch process

A little while ago, I blogged about the new package and new branch request processes.

These changes were pushed to production yesterday.

What does this change for you, packager?

New package

If you are already a packager, you know the current process to get packages into Fedora: once your package has been approved on bugzilla, you have to file an SCM request.

With the new process, this step is no longer necessary. You can directly go to pkgdb and file the request there.

From there, admins will check the package review on bugzilla and create the package in pkgdb (or refuse it with an explanation).

New branch

If your package is already in Fedora, you can now directly request a new branch in pkgdb. Here there are multiple options:

  • You have approveacls on the package (thus you are a package admin) and the request is regarding a new Fedora branch: The branch will be created automatically
  • You have approveacls on the package (thus you are a package admin) and the request is regarding a new EPEL branch: The request will be submitted to the pkgdb admins who will process it in their next run
  • You do not have approveacls on the package: your request will then be marked as `Pending`, which means the admins of the package have one week to react. They can either approve your request by setting it to Awaiting Review, or decline it (in which case they must specify a reason). After this one week (or sooner, if a package admin set the request to Awaiting Review), the pkgdb admins will process the request like they do the others.

Note: Even with this new workflow, requests are still manually reviewed, so the requests will not necessarily be processed faster (but if it is easier for the admins, they may run it more often!).

What does this change for you, admins?

Hopefully, the process will be much simpler for you. In short:

  • no need to log onto any system, you can do everything from your own machine and it should work out of the box
  • much more automated testing (including checking if a package is present in RHEL and on which arch for EPEL requests)
  • one tool to process the requests: pkgdb-admin distributed as part of packagedb-cli (aka: pkgdb-cli)



I hope this process makes sense to you and will make your life easier.

You are welcome to start using these processes already; just let us know if you run into problems. For the time being, both the old and the new processes are supported :-)

Wednesday, February 25 2015

Check your packages in pkgdb and anitya

The question was asked on the devel list earlier whether there was a way to check all one's packages for their status in pkgdb and whether they are known in anitya.

So I quickly cooked up a small script to do just that: it retrieves all the packages in pkgdb for which you are the point of contact or a co-maintainer, and tells you whether their monitoring flag is on or off in pkgdb and whether they could be found in anitya.

For example for me (partial output):

$ python pkgs_not_in_anitya.py pingou
   * point of contact
     R-ALL                                Monitor=False   Anitya=False
     R-AnnotationDbi                      Monitor=False   Anitya=False
     ...
     guake                                Monitor=True    Anitya=True
     igraph                               Monitor=False   Anitya=False
     jdependency                          Monitor=True    Anitya=True
     libdivecomputer                      Monitor=True    Anitya=True
     metamorphose2                        Monitor=False   Anitya=False
     packagedb-cli                        Monitor=False   Anitya=False
     ...
   * co-maintained
     R-qtl                                Monitor=False   Anitya=False
     fedora-review                        Monitor=True    Anitya=True
     geany                                Monitor=True    Anitya=True
     geany-plugins                        Monitor=True    Anitya=True
     homebank                             Monitor=True    Anitya=True
     libfprint                            Monitor=True    Anitya=True
     ...

If you are interested, feel free to use the script

Tuesday, December 30 2014

Firefox private browsing directly

I use the private mode of firefox quite often, for example when I want to test an application while being authenticated in one window and not authenticated in another.

I also use this mode when I want to browse some commercial websites that I know do a lot of tracking (hey there amazon!).

Finally, my firefox always has a few windows and a bunch of tabs open, and when traveling I quite often want to open firefox quickly to check something, without having it come up with all its windows and tabs.

Until now, in these situations I used either a different browser or midori, which can be started directly in private mode.

So this morning I took myself by the hand and looked closer at adapting my system to my use-case.

The recipe turned out to be pretty simple:

1/ Get the firefox.desktop file:

 cp /usr/share/applications/firefox.desktop ~/firefox-private.desktop

2/ Adjust it as follows:

-Name=Firefox
+Name=Firefox (private browsing)
[...]
-Exec=firefox %u
+Exec=firefox -private-window %u

3/ Install the new desktop file:

3.1/ In /usr/share/applications/ for every users on the system

 sudo cp ~/firefox-private.desktop /usr/share/applications/

or

3.2/ In ~/.local/share/applications/ for your user only

 cp ~/firefox-private.desktop ~/.local/share/applications/

With this trick, you can now start firefox in private browsing mode directly from the menu.

Wednesday, October 15 2014

Fedora-Infra: Did you know? The package information is now updated weekly in pkgdb2!

The package database pkgdb2 is the place where the permissions on the git repositories are managed.

In simple words, it is the place managing the "who is allowed to do what on which package".

For each package, when it is created, the summary, the description and the upstream URL from the spec file are added to the database, which allows us to display this information on the package's page. However, until two weeks ago, this information was never updated. That means that if you had an old package whose description had changed over time, pkgdb would present the one from the time the package was created in the database.

Nowadays, we have a script running on a weekly basis and updating the database. Currently, this script relies on the information provided by yum's metadata for the rawhide repo. This means that packages that are only present in EPEL, or that are retired in rawhide but present in F21, will not have their information updated. This is likely something we will fix in the future though.

In the meantime, you can now enjoy a pkgdb with summary and description information for almost all packages!

As an example, check out the fedocal page: you can now see a link to the upstream website, a short summary and a slightly longer description of the project.

Also, to give you a little hint on the amount of updates we did:

The first time we ran the script:

 16638 packages checked
 15723 packages updated

Last week's run:

 16690 packages checked
 50 packages updated

Tuesday, December 10 2013

RDFa with rdflib, python and cnucnu web


Fooling around with RDFa and some projects

Continue reading...

Tuesday, May 22 2012

pyRdfa


A bit of fooling around with the pyRdfa library

Continue reading...

Friday, August 19 2011

Parallel programming in python


A small example on basic parallel programming in python

Continue reading...

Parallel programming with python (in French)


A small, basic example of parallel programming with python

Continue reading...
