Le blog de pingou


Tag - Fedora-planet


Thursday, May 7 2015

Check packages in anitya and pkgdb2 for monitoring

A little while ago I presented a script to search for the packages of a specified user and see which are missing from anitya or are not being monitored in pkgdb2.

That script, however, only checks one person's packages, and sometimes we want to check a number of packages at once, for example all the packages matching a given pattern.

This new script does just that:

 $ python pkgs_not_in_anitya_2.py 'drupal-*'
   drupal-service_links                 Monitor=False   Anitya=False
   drupal-calendar                      Monitor=False   Anitya=False
   drupal-cck                           Monitor=False   Anitya=False
   drupal-date                          Monitor=False   Anitya=False
   drupal-workspace                     Monitor=False   Anitya=False
   drupal-views                         Monitor=False   Anitya=False

If you are interested, feel free to use the script.
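
For the curious, the core of the matching can be sketched like this (the package list and flags below are made up; the real script gets them from the pkgdb2 and anitya APIs):

```python
import fnmatch

# Hypothetical sample data; the real script fetches this from pkgdb2/anitya.
packages = {
    'drupal-views': (False, False),
    'drupal-date': (False, False),
    'guake': (True, True),
}

def report(pattern, pkgs):
    """ Return the report lines for the packages matching the glob pattern. """
    lines = []
    for name in sorted(pkgs):
        if fnmatch.fnmatch(name, pattern):
            monitor, anitya = pkgs[name]
            lines.append('%-36s Monitor=%-7s Anitya=%s' % (name, monitor, anitya))
    return lines

for line in report('drupal-*', packages):
    print(line)
```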

Wednesday, May 6 2015

Flock 2015: Your vote has been recorded. Thank you!

The election to select the talks for flock 2015 started yesterday.

Anyone who has signed the FPCA and is in at least one more group can participate in this election and help select the most interesting talks to be held at flock 2015 in Rochester (NY). Some of the submitted talks look really interesting; I am looking forward to seeing the agenda and I hope the ones I want to see will not conflict too much :-)

This year the election is using the simplified range voting approach. The principle is the same as for classical range voting, but instead of scoring each candidate between 0 and X (X being the number of candidates, which is 132 for this election), you score each candidate between 0 and 3.

You can of course make your own scale, but I went for something along the lines of:

  • 0: not really interested in this talk
  • 1: can be interesting, not sure
  • 2: looks like an interesting talk
  • 3: I really want to see this talk
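
With range voting, the winner is simply the candidate with the highest total score. A toy illustration (the talk names and ballots are made up):

```python
# Hypothetical ballots: each maps a talk to a score between 0 and 3.
ballots = [
    {'talk A': 3, 'talk B': 1, 'talk C': 0},
    {'talk A': 2, 'talk B': 3, 'talk C': 1},
    {'talk A': 3, 'talk B': 0, 'talk C': 2},
]

totals = {}
for ballot in ballots:
    for talk, score in ballot.items():
        totals[talk] = totals.get(talk, 0) + score

# Talks ranked by total score, highest first.
ranking = sorted(totals, key=totals.get, reverse=True)
print(ranking)  # → ['talk A', 'talk B', 'talk C']
```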



And you, did you vote?

Tuesday, April 14 2015

FOSS Emoji

Just wanted to make a quick note here.

Today, while looking for emoji for pagure, I ran into http://emojione.com/. This project provides Free and Open Source emoji icons that can thus be re-used in other projects.

Just a heads-up to those looking for a FOSS emoji database/project, and a big thanks to the developers and artists behind this awesome project!

Friday, April 3 2015

OpenSearch integration in pkgdb

One of the earliest feature requests for pkgdb2 (a feature that was already present in pkgdb1) is browser search integration.

This integration is based on the OpenSearch specification and basically allows you to use pkgdb as one of your web browser's search engines, just like Google, DuckDuckGo or Wikipedia.
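
Under the hood, the application serves an OpenSearch description document, a small XML file the browser fetches to learn about the search engine. A minimal sketch of such a document, generated here with ElementTree (the names and search URL template are illustrative, not pkgdb's actual values):

```python
import xml.etree.ElementTree as ET

NS = 'http://a9.com/-/spec/opensearch/1.1/'
ET.register_namespace('', NS)

root = ET.Element('{%s}OpenSearchDescription' % NS)
ET.SubElement(root, '{%s}ShortName' % NS).text = 'Fedora PkgDB2: Packages'
ET.SubElement(root, '{%s}Description' % NS).text = 'Search Fedora packages'
# The {searchTerms} placeholder is replaced by the browser with the query.
url = ET.SubElement(root, '{%s}Url' % NS)
url.set('type', 'text/html')
url.set('template', 'https://admin.fedoraproject.org/pkgdb/search/?term={searchTerms}')

doc = ET.tostring(root, encoding='unicode')
print(doc)
```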

I recently found out this feature is not so well known, so I thought I would present it and explain how to set it up (the screenshots are from Firefox).

1/ Go to https://admin.fedoraproject.org/pkgdb and click on the list of search engines at the top right.

2/ Select the entry Add "Fedora PkgDB2: Packages"

That's it, the most important step is done :)

pkgdb_search_3.1.png

Now, something I do and find most useful:

3/ Go to Manage Search Engines...

There, associate the keyword pkgdb with the pkgdb packages search engine.

pkgdb_search_5.png

Now, you can use your url bar as usual, but when you enter pkgdb <something>, it will search for this <something> directly in pkgdb. So for example, if you want to search for guake in pkgdb, you would type pkgdb guake in your url bar.

pkgdb_search_6.png

The bonus point is that since there is only one package with this name, you will be immediately redirected to its page.

This way, when you want to quickly find information about a package in pkgdb, you can get it from your browser in one simple step (or two if several packages match the keyword you entered).

Final bonus point? To access pkgdb directly, enter "pkgdb " (with a space at the end and no keyword) in the url bar, and Firefox will bring you straight to the application's front page.

Wednesday, March 25 2015

Progit is dead, long live pagure

You may have heard of a little pet project I have been working on recently. I called it progit, but there is already a better-known project named progit (the Pro Git book).

So, after long deliberations, we decided to rename the project: pagure.

What is Pagure?

Pagure is a small git-centered forge project. You can host your code, your documentation, your tickets and have people contribute to the project by forking it and opening pull-requests.

All the information about a project is hosted in different git repositories: the code of course, but also the documentation as well as the metadata (discussions) of tickets and pull-requests. The idea is that one could host a project in multiple instances of pagure and keep them in sync.

What about the name?

Pagure is the generic French name for animals of the superfamily Paguroidea, which includes the well-known Pagurus bernhardus. This little crab moves from shell to shell as it grows. I found it a nice analogy for this forge, where projects can move from place to place.

Where can I see it?

Pagure is still under development and pretty much changes every day. However, you can already see it, test it and poke at it via the dev instance we have running.

As you will see, pagure itself is being developed there, so feel free to open a ticket if pagure does not do something you would like (or does something you do not like).

Tuesday, March 24 2015

New package & new branch process

A little while ago, I blogged about the new package and new branch request processes.

These changes have been pushed to production yesterday.

What does this change for you, packager?

New package

If you are already a packager, you know the current process for getting packages into Fedora: once your package has been approved on bugzilla, you have to file an SCM request.

With the new process, this step is no longer necessary. You can directly go to pkgdb and file the request there.

From there, the admins will check the package review on bugzilla and create the package in pkgdb (or refuse the request with an explanation).

New branch

If your package is already in Fedora, you can now directly request a new branch in pkgdb. Here there are multiple options:

  • You have approveacls on the package (thus you are a package admin) and the request is regarding a new Fedora branch: The branch will be created automatically
  • You have approveacls on the package (thus you are a package admin) and the request is regarding a new EPEL branch: The request will be submitted to the pkgdb admins who will process it in their next run
  • You do not have approveacls on the package: your request will be marked as `Pending`, which means that the admins of the package have one week to react. They can either approve your request by setting it to Awaiting Review, or decline it (in which case they must specify a reason). After this one week (or sooner if a package admin sets the request to Awaiting Review), the pkgdb admins will process the request like they do the others.
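
The three cases above can be sketched as a small decision function (the helper name and return strings are mine; the real logic lives inside pkgdb2):

```python
def initial_branch_request_state(has_approveacls, is_epel_branch):
    """ Return what happens to a new branch request, per the rules above:
    - package admin + Fedora branch: the branch is created automatically
    - package admin + EPEL branch: queued for the pkgdb admins' next run
    - not a package admin: Pending, awaiting the package admins' review
    """
    if has_approveacls:
        return 'Queued for pkgdb admins' if is_epel_branch else 'Branch created automatically'
    return 'Pending'

print(initial_branch_request_state(True, False))   # → Branch created automatically
print(initial_branch_request_state(True, True))    # → Queued for pkgdb admins
print(initial_branch_request_state(False, False))  # → Pending
```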

Note: Even with this new workflow, requests are still reviewed manually, so they will not necessarily be processed faster (but if it is easier for the admins, they may run the process more often!).

What does this change for you, admins?

Hopefully, the process will be much simpler for you. In short:

  • no need to log into any system; you can do everything from your own machine and it should work out of the box
  • much more automated testing (including checking if a package is present in RHEL and on which arch for EPEL requests)
  • one tool to process the requests: pkgdb-admin distributed as part of packagedb-cli (aka: pkgdb-cli)



I hope this process makes sense to you and will make your life easier.

You are already welcome to use these processes; just let us know if you run into problems. For the time being, both the old and the new processes are supported :-)

Wednesday, February 25 2015

Check your packages in pkgdb and anitya

The question was asked on the devel list earlier whether there was a way to check all of one's packages for their status in pkgdb and whether they are present in anitya.

So I quickly cooked up a small script to do just that: it retrieves all the packages in pkgdb for which you are the point of contact or a co-maintainer, and tells you whether each package's monitoring flag is on or off in pkgdb and whether the package could be found in anitya.

For example, for me (partial output):

$ python pkgs_not_in_anitya.py pingou
   * point of contact
     R-ALL                                Monitor=False   Anitya=False
     R-AnnotationDbi                      Monitor=False   Anitya=False
     ...
     guake                                Monitor=True    Anitya=True
     igraph                               Monitor=False   Anitya=False
     jdependency                          Monitor=True    Anitya=True
     libdivecomputer                      Monitor=True    Anitya=True
     metamorphose2                        Monitor=False   Anitya=False
     packagedb-cli                        Monitor=False   Anitya=False
     ...
   * co-maintained
     R-qtl                                Monitor=False   Anitya=False
     fedora-review                        Monitor=True    Anitya=True
     geany                                Monitor=True    Anitya=True
     geany-plugins                        Monitor=True    Anitya=True
     homebank                             Monitor=True    Anitya=True
     libfprint                            Monitor=True    Anitya=True
     ...

If you are interested, feel free to use the script.

About SourceForge and anitya

There are a couple of reports (1 and 2) about anitya not doing its job properly for projects hosted on sourceforge.net.

So here is a summary of the situation:

A project X on sourceforge.net, for example with the homepage sourceforge.net/projects/X, releases multiple tarballs named X-1.2.tar.gz, libX-0.3.tar.gz and libY-2.0.tar.gz.

So, how to model this?

The original approach was: the project is named X, so in anitya we name it X, and anitya's sourceforge backend allows specifying a SourceForge project name, which is used to search for X, libX or libY in the RSS feed of the X project on SourceForge. Problem: when adding libX or libY to anitya, the project name and homepage are both X and sourceforge.net/projects/X, and it is precisely these two fields that make a project unique in anitya (in other words, adding libX and libY won't be allowed).

So this is the current situation and as you can see, it has problems (which explains the two issues reported).


What are the potential solutions?

1/ Extend the unique constraint

We could include the tarball name to search for in the unique constraint, which would then change from name+homepage to name+homepage+tarball.
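
In SQL terms, option 1 boils down to widening the unique constraint. A sketch with an in-memory SQLite table (the column names are simplified from anitya's actual schema, and the rows are the example values from above):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("""
    CREATE TABLE projects (
        name TEXT NOT NULL,
        homepage TEXT NOT NULL,
        tarball TEXT,
        UNIQUE (name, homepage, tarball)   -- was UNIQUE (name, homepage)
    )
""")

# Same name and homepage but different tarballs: now both are allowed.
conn.execute("INSERT INTO projects VALUES ('X', 'sourceforge.net/projects/X', 'libX')")
conn.execute("INSERT INTO projects VALUES ('X', 'sourceforge.net/projects/X', 'libY')")

# An exact duplicate is still rejected by the constraint.
try:
    conn.execute("INSERT INTO projects VALUES ('X', 'sourceforge.net/projects/X', 'libX')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False

count = conn.execute('SELECT COUNT(*) FROM projects').fetchone()[0]
print(count, duplicate_allowed)  # → 2 False
```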

2/ Invert the use of name and tarball

Instead of having the project name be X with a tarball name libX, we could make the project be libX and the tarball be X.

This sounds quite nice and easy, but looking at the projects currently in anitya's database, I found projects like:

         name         |                   homepage                    |      tarball
 ---------------------+-----------------------------------------------+--------------------
  linuxwacom          | http://sf.net/projects/linuxwacom/            | xf86-input-wacom
  brutalchess (alpha) | http://sourceforge.net/p/brutalchess          | brutalchess-alpha
  chemical-mime       | http://sourceforge.net/projects/chemical-mime | chemical-mime-data

So for these, the tarball name would become the project name, which would be pretty ugly.

I am not quite sure what the best approach is here.

What do you think?

Thursday, January 22 2015

New branch request process

A little while ago I blogged about a new process to request a new branch on an existing package.

The code to support this change is now under review but I thought I should document the workflow a little bit, so here is how I tried to design things:

pkgdb_new_branch_flow_2.png

Ideally, when the branch is approved and created in pkgdb by the admin, pkgdb will send a message on fedmsg; that message will be seen by a fedmsg consumer which will automatically update the git repos within, say, 2 minutes. That last part is almost ready and will hopefully be running soon.

Tuesday, December 30 2014

Firefox private browsing directly

I use the private mode of firefox quite often, for example when I want to test an application while being authenticated in one window and not authenticated in another.

I also use this mode when I want to browse some commercial websites that I know do a lot of tracking (hey there amazon!).

Finally, my firefox always has a few windows and a bunch of tabs open, and when traveling I quite often want to open firefox quickly to check something without it coming up with all those windows and tabs.

Until now, in these situations I used either a different browser or midori, which can be started directly in private mode.

So this morning I finally sat down and looked closer at fixing my system for this use-case.

The recipe turned out to be pretty simple:

1/ Get the firefox.desktop file:

 cp /usr/share/applications/firefox.desktop ~/firefox-private.desktop

2/ Adjust it as follows:

-Name=Firefox
+Name=Firefox (private browsing)
[...]
-Exec=firefox %u
+Exec=firefox -private-window %u
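
The resulting file then looks something like this (trimmed to the relevant keys; a stock firefox.desktop carries more entries):

```ini
[Desktop Entry]
Name=Firefox (private browsing)
Exec=firefox -private-window %u
Type=Application
Icon=firefox
```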

3/ Install the new desktop file:

3.1/ In /usr/share/applications/ for every user on the system

 sudo cp ~/firefox-private.desktop /usr/share/applications/

or

3.2/ In ~/.local/share/applications/ for your user only

 cp ~/firefox-private.desktop ~/.local/share/applications/

With this trick, you can now start firefox in private browsing mode directly from the menu.

Monday, December 15 2014

Fedora 21 release day, 7 days later

Last week Tuesday, we released the 21st version of Fedora. The morning of the release we noticed that the load of some of the proxies was running very high. So we started checking our monitoring for the incoming traffic. A week later, this is an overview of the traffic on our proxies over the last ten days (so 3 days before the release and 7 days since).

collectd_f21.png

The third one is quite impressive, and looking at more of these graphs we can see a similar pattern: the traffic really bumped on release day and the following two days, and is now slowly recovering.

If you want to see more of these pretty pictures/graphs, check out our collectd.

Friday, December 12 2014

Infra FAD 2014 - Part 2: Ansible

Part 1: MirrorManager

It has been two days since I came back and others have already reported on our progress (Ralph, kevin day 0 & 1, kevin day 2, kevin day 3, kevin day 4 and finally kevin day 5), but I wanted to come back on it as well :)

So seven of us from the Fedora Infrastructure team met up at the Red Hat office in Raleigh. We had Matt Domsch with us for the first couple of days to help us understand how MirrorManager works (see Part 1).

The second part of the FAD was dedicated to moving forward on the infrastructure task of migrating away from puppet in favor of Ansible. This led to the most productive week we have ever had on our Ansible git repo. I was able to start porting things like varnish or haproxy while Ralph was doing the heavy lifting of porting the proxies themselves. Patrick worked on porting the nameservers and managed to actually re-install them using Ansible (moving them to RHEL7 while at it). Smooge has been poking at the setup for fedorapeople.

With all that, we also managed to get MirrorManager2 into staging, and Luke wrote some awesome unit-tests for mirrorlist which already allowed us to make some small optimizations.

All in all, I have to say that I had a great time. I have the feeling that we achieved a lot of what we wanted to do and that we were really efficient at it :-)

To remain critical about the organization: I think I agree with Ralph that for the next FAD we should be extra careful to really organise some sort of social event. We had strange hours (having lunch at 3pm or even 5pm once), and on the one afternoon where we said we would take off, we ended up working... Being involved in the organization while not on site makes it difficult to find something nice for the social event, but I think we/I should have tried harder to find something nice to do.

Anyway, like I said, I had a great time and I'm thankful to everyone who was able to make it to Raleigh, to the OSAS team at Red Hat that funded most of this FAD, and to Ansible for inviting us to dinner on Friday evening :-)

Thanks a bunch folks!

DSC_0026.1.JPG

Saturday, December 6 2014

Infra FAD 2014 - Part 1: MirrorManager

The last two days have been quite busy for the Fedora infrastructure team. Most of us are indeed meeting up in Raleigh, in the Red Hat tower downtown, and together with Matt Domsch, the original developer of MirrorManager, we have been working on MirrorManager2.

It was really great for us that Matt could join. MirrorManager is pretty straightforward in theory, but it is also full of small details which can make it hard to understand fully. Having Matt with us allowed us to ask him as many questions as we wanted. We were also able to go with him through all the utility scripts and all the cron jobs that make MirrorManager work.

The good surprise was that a significant part of the code had already been converted for MirrorManager2, but we still found some cron jobs and scripts that needed to be ported.

So after spending most of the first day on getting to understand and know more about the inner processes of MirrorManager, we were able to start working on porting the missing parts to MirrorManager2.

We also took the opportunity to discuss with Matt, Luke and David how things should look for atomic, and Ralph was able to make the first changes to make this a reality :-)

So by yesterday evening we had all the crons/scripts (but one, which in fact isn't needed for MM2) converted to MirrorManager2 \ó/

That was a good point to stop and head quickly to the Red Hat Christmas party, before meeting Greg who invited us to a dinner sponsored by Ansible. We had a really nice meal and evening. Thanks Greg, thanks Ansible!

Today started the second part of the FAD: Ansible. But more on that later ;-)

Thursday, November 27 2014

Python multiprocessing and queue

Every once in a while I want to run a program in parallel but gather its output in a single process so that I do not have concurrent accesses (think, for example, of several processes computing something and storing the output in a file or in a database). I could use locks for this, but I figured I could also use a queue.

My problem is that I always forget how to do it and need to search for it every time I want to do it again :-) So, for you as much as for me, here is an example:

# -*- coding: utf-8 -*-

import itertools
from multiprocessing import Pool, Manager


def do_something(arg):
    """ This function does something important in parallel but where we
    want to centralize the output, thus using the queue
    """
    data, myq = arg
    print data
    myq.put(data)


data = range(100)
m = Manager()
q = m.Queue()
p = Pool(5)
p.map(do_something, itertools.product(data, [q]))


# map() has returned, so all the workers are done and everything they
# produced is in the queue; drain it from this single process.
with open('output', 'w') as stream:
    while q.qsize():
        item = q.get()
        print item
        stream.write('%s\n' % item)
        # task_done() pairs with get(), so the join() below won't block
        q.task_done()
    q.join()

There are probably other/better ways to do this but that's a start :-)

Wednesday, October 15 2014

Fedora-Infra: Did you know? The package information is now updated weekly in pkgdb2!

The package database pkgdb2 is the place where the permissions on the git repositories are managed.

In simple words, it is the place managing the "who is allowed to do what on which package".

For each package, at creation time, the summary, the description and the upstream URL from the spec file are added to the database, which allows us to display this information on the package's page. However, until two weeks ago, this information was never updated. That means that if you had an old package whose description had changed over time, pkgdb would present the one from the time the package was created in the database.

Nowadays, we have a script running on a weekly basis that updates the database. Currently, this script relies on the information provided by yum's metadata for the rawhide repo. This means that packages that are only present in EPEL, or that are retired in rawhide but present in F21, will not have their information updated. This is likely something we will fix in the future though.
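
The refresh itself boils down to a compare-and-update loop. A toy sketch (the metadata here is made up; the real script reads it from yum):

```python
# What the database currently holds, and what the repo metadata says
# (both invented for the example).
db = {
    'guake': {'summary': 'Drop-down terminal', 'description': 'Old text.'},
    'fedocal': {'summary': 'A web calendar', 'description': 'Same text.'},
}
repo = {
    'guake': {'summary': 'Drop-down terminal', 'description': 'New text.'},
    'fedocal': {'summary': 'A web calendar', 'description': 'Same text.'},
}

checked = updated = 0
for name, meta in repo.items():
    checked += 1
    if db.get(name) != meta:
        db[name] = meta   # only write when something actually changed
        updated += 1

print('%s packages checked' % checked)
print('%s packages updated' % updated)
```

This also explains the numbers below: the first run had years of drift to catch up on, while subsequent runs only touch the packages whose metadata changed during the week.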

In the meantime, you can now enjoy a pkgdb with summary and description information for almost all packages!

As an example, check out the fedocal page: you can now see a link to the upstream website, a short summary and a slightly longer description of the project.

Also, to give you a little hint on the amount of updates we did:

The first time we ran the script:

 16638 packages checked
 15723 packages updated

Last week's run:

 16690 packages checked
 50 packages updated

Saturday, August 9 2014

Flock 2014 - day 1 to 3

Today is the fourth day of flock. As usual, the last three days have been really nice. I got to attend a number of interesting talks and could even present a couple of projects that I am, or will be, working on.

I attended Luke's talk on how pushing updates in Fedora will look in the coming months. Bodhi 2 is the new version of the application we use to manage our updates in Fedora. Luke and others have been working hard on it, and the work they did is really impressive! Bodhi 2 looks better from all angles: UI, infra, workflow. Apparently the plan is to deploy it after the release of Fedora 21 but before the end of the year, so stay tuned, it's arriving ;-)

I was also able to attend the presentation about python 3 in Fedora. I must say that this is looking promising and there are some new shiny things in python 3 that I am already looking forward to (most notably the possibility of having keyword-only arguments in functions; this is going to be sweet).

On Thursday, I gave a presentation about the future Fedora Review Server (we couldn't find a better name for it and people seemed to like it :-)), more on that later.

The same day, Adimania presented his feelings and the state of things with regard to Ansible in the Fedora Infrastructure. I think it was a nice summary of why we are moving and of what we like about Ansible.

Thursday afternoon, I went to the talk about NoSQL in the Fedora Infrastructure. More than a state of things, it was a plea that we should keep the NoSQL technologies in mind for the infra and not fear using them where they make sense. Yograterol did a nice job presenting the different NoSQL technologies, and clearly we should consider them where appropriate. Thinking further about it with Ralph, we thought that using MongoDB for datagrepper might be interesting; we should benchmark this :)

Finally, yesterday I was able to present a little project I have been working on for a while: progit. I will blog about it in the near future, so stay tuned ;-)

Then I attended Kevin's talk about the present and future of the Fedora infrastructure. This was a good overview of the different irons we have in the fire at the moment, and of those near the fire that aren't yet too hot. One thing is sure: I am really looking forward to having our bugzilla hooked up to fedmsg!

The joint session on Fedora.next chaired by our dear FPL was also quite interesting and provided a very nice overview of what the different working groups are currently up to. It was nice to see things moving forward; if some parts are still a little shady, I guess they won't remain that way for long.

Yesterday afternoon was a session on EPEL.next. There are still a number of concerns and questions about how things could or should be in EPEL. Some things are good and some could be improved; there are some generic ideas (such as having a new repo, EPIC, which would contain more rapidly evolving software, or more recent versions of software than what is currently in EPEL), but there again the devil is in the details and there will need to be some more thought and work before we can see this live.

I guess this is it for the talks, I attended a few more but I can't possibly detail them all here :-)

Next time, more info on what we actually got done during these few days!

Friday, July 25 2014

The Joy of timezones

Today, I was looking at fedocal as I had found out that it could not import its own iCal files.

Well, to be exact, the import worked fine, but fedocal was then not able to display the meeting. The source of the issue is that the iCal output relies on timezone names such as EDT or CEST, while fedocal actually expects timezones of the form US/Eastern or Europe/Paris.

So I went looking for a way to convert the abbreviations to real timezones.

I finally came up with the following script:

import pytz
from datetime import datetime

timezone_lookup = dict()
for tz in pytz.common_timezones:
    name = pytz.timezone(tz).localize(datetime.now()).tzname()
    if name in timezone_lookup:
        timezone_lookup[name].append(tz)
    else:
        timezone_lookup[name] = [tz]

for key in sorted(timezone_lookup):
    print key, timezone_lookup[key]

Which led me to discover things like:

  IST ['Asia/Colombo', 'Asia/Kolkata', 'Europe/Dublin']

The Indian Standard Time and the Irish Standard Time have the same acronym

but also:

  EST ['America/Atikokan', 'America/Cayman', 'America/Jamaica', 'America/Panama', 'Australia/Brisbane', 'Australia/Currie', 'Australia/Hobart', 'Australia/Lindeman', 'Australia/Melbourne', 'Australia/Sydney']

So how to handle this?

The only solution I could come up with relies on both the abbreviation and the offset between that timezone and UTC.

Adjusted script:

import pytz
from datetime import datetime

timezone_lookup = dict()
for tz in pytz.common_timezones:
    name = pytz.timezone(tz).localize(datetime.now()).tzname()
    offset = pytz.timezone(tz).localize(datetime.now()).utcoffset()
    key = (name, offset)
    if key in timezone_lookup:
        timezone_lookup[key].append(tz)
    else:
        timezone_lookup[key] = [tz]

for key in sorted(timezone_lookup):
    print key, timezone_lookup[key]

And the corresponding output:

...
('EST', datetime.timedelta(-1, 68400)) ['America/Atikokan', 'America/Cayman', 'America/Jamaica', 'America/Panama']
('EST', datetime.timedelta(0, 36000)) ['Australia/Brisbane', 'Australia/Currie', 'Australia/Hobart', 'Australia/Lindeman', 'Australia/Melbourne', 'Australia/Sydney']
...
('IST', datetime.timedelta(0, 3600)) ['Europe/Dublin']
('IST', datetime.timedelta(0, 19800)) ['Asia/Colombo', 'Asia/Kolkata']
...

So much fun...

Wednesday, July 23 2014

New package, new branch, new workflow?

If you are a Fedora packager, you are probably aware of the new pkgdb.

One question which has been raised by this new version is: should we change the process for requesting new branches or integrating new packages into the distribution?

The discussion happened on the rel-eng mailing list, but I'm gonna try to summarize here what the process is today and what it might become in the coming weeks.

Current new-package procedure:
  1. packager opens a review-request on bugzilla
  2. reviewer sets the fedora-review flag to ?
  3. reviewer does the review
  4. reviewer sets the fedora-review flag to +
  5. packager creates the scm-request and set fedora-cvs flag to ?
  6. cvsadmin checks the review (check reviewer is a packager)
  7. cvsadmin processes the scm-request (create git repo, create package in pkgdb)
  8. cvsadmin sets fedora-cvs flag to +
New procedure
  1. packager opens a review-request on bugzilla
  2. reviewer sets the fedora-review flag to ?
  3. reviewer does the review
  4. reviewer sets the fedora-review flag to +
  5. packager goes to pkgdb2 to request new package (specifying: package name, package summary, package branches, bugzilla ticket)
  6. requests added to the scm admin queue
  7. cvsadmin checks the review (check reviewer is a packager¹)
  8. cvsadmin approves the creation of the package in pkgdb
  9. package creation is broadcasted on fedmsg
  10. fedora-cvs flag set to + on bugzilla
  11. git adjusted automatically

Keeping the fedora-cvs flag in bugzilla allows us to perform a regular (daily?) check that there are no fedora-review flags set to + that have been approved in pkgdb but whose fedmsg message hasn't been processed.

Looking at the numbers, it looks like there are more steps in the new procedure, but eventually most of them can be automated.

New branch process

For new branches, the process would be very similar:

  1. packager goes to pkgdb2 to request new branch
  2. requests added to the scm admin queue
  3. cvsadmin checks the request (requester is a packager...)
  4. cvsadmin approves the creation of the branch in pkgdb
  5. branch creation is broadcasted on fedmsg
  6. git adjusted automatically

Tuesday, July 8 2014

1 year

Today is the first anniversary of the day we said good-bye to a good friend.

There have been a number of tributes in the couple of months following his passing, and there are still some once in a while. Personally, I hardly spend a week without remembering him or asking myself "What would Seth say?".

Good bye old friend, may your wisdom lead us.

Thursday, June 26 2014

Faitout, 1000 sessions

A while back, I introduced faitout on this blog.

Since then I have been using it to test most, if not all, of the projects I work on. I basically use the following set-up:

DB_PATH = 'sqlite:///:memory:'
FAITOUT_URL = 'http://209.132.184.152/faitout/'
try:
    import requests
    req = requests.get('%s/new' % FAITOUT_URL)
    if req.status_code == 200:
        DB_PATH = req.text
        print 'Using faitout at: %s' % DB_PATH
except:
    pass

This way, if I have network access, the tests run with faitout and thus against a real postgresql database, while if I do not, they run against an in-memory sqlite database.

This set-up allows me to work offline and still be easily able to run all the unit-tests as I change the code.

The point of this blog post was actually more to announce that, despite its limited spread (only 25 different IP addresses have requested sessions), the tool is used and has already reached 1,000 sessions created (and dropped) in less than a year.



If you're not using it, I invite you to have a look at it; I find it marvelous in combination with Jenkins and it does help find bugs in your code.

If you are using it, congrats and keep up the good work!!
