[Rpm-ecosystem] Testing the Dependency Chain

Radek Holy rholy at redhat.com
Fri Sep 11 10:40:14 UTC 2015


Sorry for the late reply. I was on vacation and then I needed to finish something as soon as possible.


----- Original Message -----
> From: "Pavel Odvody" <podvody at redhat.com>
> To: rpm-ecosystem at lists.rpm.org
> Sent: Monday, August 24, 2015 12:57:21 PM
> Subject: Re: [Rpm-ecosystem] Testing the Dependency Chain
> 
> On Fri, 2015-08-21 at 15:03 -0400, Radek Holy wrote:
> > 
> > ----- Original Message -----
> > > From: "Pavel Odvody" <podvody at redhat.com>
> > > To: rpm-ecosystem at lists.rpm.org
> > > Sent: Friday, August 21, 2015 6:21:37 PM
> > > Subject: [Rpm-ecosystem] Testing the Dependency Chain
> > > 
> > > Hello,
> > > 
> > > I've set up a test suite for convenient testing of the various setups
> > > that the dependency system in RPM provides. Please refer to [1] for
> > > further information about the implementation details of the test suite.
> > > 
> > > =Test results and how to interpret them
> > > Whether a test succeeded or failed needs to be interpreted carefully,
> > > as the test context might differ slightly in each particular case, so
> > > that the expectations about the behavior stay properly aligned with
> > > what happens under the hood. One example is the failed case #2 below:
> > > the test failed because the expectation was that Recommends are not
> > > installed by default, and by 'failing' the test proved that Recommends
> > > actually *are* installed by default.
> > > 
> > > Generally I'd call this "a game of subtle side effects", where each part
> > > of the dependency chain adds something to the resulting equation, and
> > > until we compile some matrix/chart with all the influences that each of
> > > those components has on the whole, we're swimming in a gray area, since
> > > we have no idea what the result is going to be (a hunch does not count).
> > > 
> > > What I'd like to propose as a solution is test-driven formalization,
> > > where we write tests for each formal aspect of the dependency chain
> > > until we reach 100% coverage. With that we can generate great
> > > documentation for package maintainers, so that they don't need to fear
> > > it like voodoo.
> > > 
> > > =How to test?
> > > First we need to build the container image that will serve as a basis
> > > for our test environment.
> > >   
> > >   $ git clone https://github.com/shaded-enmity/richdeps-docker
> > >   $ cd richdeps-docker/
> > >   $ docker build -t richdeps:1.0.0 .
> > > 
> > > The git repo already contains one testing repository and a few test
> > > cases.
> > > So let's try the first one:
> > > 
> > >   $ ./test-launcher.py tests/test1.json
> > >   Loading test configuration from:
> > >    tests/test1.json
> > >   Starting container:
> > >    docker run -i -v
> > >    /home/podvody/Repos/richdeps-docker/repos/test-1:/repo:Z
> > >    richdeps:1.0.0
> > > 
> > >   OK
> > > 
> > > The test succeeded as expected: we installed package TestA, which
> > > requires (TestB | TestC) and recommends TestC -- packages TestA and
> > > TestC were installed.
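> > > 
> > > For reference, this is roughly what the relevant tags in TestA's spec
> > > file look like (in RPM's boolean dependency syntax the "|" above is
> > > spelled "or"; the actual spec files live in the repo):
> > > 
> > >   Name:       TestA
> > >   Requires:   (TestB or TestC)
> > >   Recommends: TestC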
> > > 
> > > The output of the second test is quite lengthy since it fails:
> > > 
> > >   $ ./test-launcher.py tests/test2.json
> > >   (see output at: http://pastebin.com/hb61xrNX)
> > > 
> > > The test failed because we installed TestB before TestA; installing
> > > TestA then also installed TestC, even though TestC was only recommended
> > > and the (TestB | TestC) requirement was already satisfied by TestB.
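> > > 
> > > The same scenario can be reproduced by hand inside the container,
> > > roughly like this (a sketch; it assumes the image's package manager is
> > > dnf and that the test repository is already configured):
> > > 
> > >   dnf -y install TestB      # satisfies (TestB | TestC) up front
> > >   dnf -y install TestA      # TestC is only recommended and the
> > >                             # requirement is already met
> > >   rpm -q TestA TestB TestC  # check what actually got installed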
> > > 
> > > =Repos and packages
> > > All repos and packages were created manually. I'm currently looking
> > > into using rpmfluff [2] to make the test suite somewhat automated.
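> > > 
> > > A first sketch of what that could look like (untested; I'm assuming
> > > rpmfluff's SimpleRpmBuild/YumRepoBuild and add_requires() here, and I
> > > haven't checked whether there is a helper for Recommends yet):
> > > 
> > >   # build_testpkgs.py -- untested sketch
> > >   from rpmfluff import SimpleRpmBuild, YumRepoBuild
> > > 
> > >   pkg_a = SimpleRpmBuild('TestA', '1.0', '1')
> > >   pkg_a.add_requires('(TestB or TestC)')  # rich dep as a literal string
> > >   # Recommends: TestC would still have to be injected by hand unless
> > >   # rpmfluff grows a helper for weak dependencies
> > >   pkg_b = SimpleRpmBuild('TestB', '1.0', '1')
> > >   pkg_c = SimpleRpmBuild('TestC', '1.0', '1')
> > > 
> > >   repo = YumRepoBuild([pkg_a, pkg_b, pkg_c])
> > >   repo.make('x86_64')  # builds the rpms and runs createrepo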
> > > 
> > > 
> > > Suggestions, discussion, pull requests with test cases and repos are
> > > very welcome :)
> > > 
> > > [1]: https://github.com/shaded-enmity/richdeps-docker
> > > [2]: https://fedorahosted.org/rpmfluff/
> > > --
> > > Pavel Odvody <podvody at redhat.com>
> > > Software Engineer - EMEA ENG Developer Experience
> > > 5EC1 95C1 8E08 5BD9 9BBF 9241 3AFA 3A66 024F F68D
> > > Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno
> > 
> > 
> > 1) I think that there is no need to invent yet another testing framework.
> > Personally, I love the current trend of behaviour-driven development. The
> > tests are very readable and can serve both as simple documentation and as
> > a test suite. I mean, some of these BDD frameworks seem to be appropriate
> > for this project. But there are also very good TDD frameworks.
> > 
> 
> The code bootstrapping the containers and running the test specification
> is around 5 kB; I'd hardly call that a testing framework, and much of the
> code would need to be integrated into the BDD suite anyway. E.g.:

Well, I rather meant the "test-suite" script. Yes, it is simple now, but I believe that with more and more experience it would become more and more complicated.

> @behave("packages {} are installed")
> def pkgsAreInstalled(a, b, c):
>    ...
> 
> @behave("packages {} are not installed")
> def pkgsAreMissing(a, b, c):
>    ...
> 
> Perhaps you could provide an example of one of the tests rewritten for
> Behave or something like that.
> 
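Sure, here is a rough idea of what test1 could look like in Behave (just a sketch; the step names and the container plumbing are made up, only the package names and the expected outcome come from your test):

  # features/richdeps.feature
  Feature: Rich dependency resolution
    Scenario: Recommends decides between alternatives
      Given a clean container with the "test-1" repository
      When I install package "TestA"
      Then packages "TestA, TestC" are installed

  # features/steps/richdeps_steps.py
  from behave import given, when, then

  @given('a clean container with the "{repo}" repository')
  def start_container(context, repo):
      # hypothetical helper that reuses your docker bootstrap code and
      # returns a callable running a command in the container
      context.run = make_container_runner(repo)

  @when('I install package "{pkg}"')
  def install_package(context, pkg):
      context.run(['dnf', '-y', 'install', pkg])

  @then('packages "{pkgs}" are installed')
  def check_installed(context, pkgs):
      for pkg in pkgs.split(','):
          assert context.run(['rpm', '-q', pkg.strip()]) == 0

The feature file is exactly the kind of text I'd like to see end up as the generated documentation for maintainers.
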
> > 2) I think that it should be containerization-technology-agnostic as well.
> > E.g. in the case of rich dependencies, using chroots seems to be
> > sufficient. My concern is that I'm not sure how well Docker is supported
> > on the hosted continuous integration services where these tests should be
> > run.
> > 
> A chroot is enough if you already have everything else ready, but here we
> also need to deliver rich-deps-enabled tooling.
> I think that any CI worth its salt should allow you to do pretty much
> anything in the VM.
> 
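Fair enough. For the record, once a rich-deps-capable rpm/dnf is available on the host, the chroot variant could be as simple as something like this (an untested sketch; the repository configuration and the --releasever value would need tweaking):

  dnf -y --installroot=/tmp/richdeps-root --releasever=23 install TestA
  rpm --root=/tmp/richdeps-root -q TestA TestB TestC

But I agree that delivering the patched tooling is the hard part, and the container image solves that nicely.
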
> > Anyway, these are just nitpicks. I think that this is very valuable, and
> > thank you for working on it.
> > 
> > Also, initially I was afraid that separate tests might become outdated
> > very soon. But the more I think about it, the more I think it's a good
> > idea, since they should really be shared between all the package managers.
> > Just make sure that these tests are part of the continuous integration
> > process of the desired components.
> 
> In the first phase we're essentially *defining* what we are going to test
> in the future, and since the behavior should be the same across all pkg
> managers, these tests should never become outdated per se.

Sure. But after months of DNF development I have found that the final word always belongs to the particular Linux distribution, i.e. the FPC. We can define/plan/think whatever we want, but in the end all the tools have to follow the guidelines produced by these committees. What I mean is that the committees should define/update the tests, or at least know about them, otherwise the tests *may* become outdated, especially when it comes to corner cases. Although I admit that's not very likely.


> Thanks!
> 
> --
> Pavel Odvody <podvody at redhat.com>
> Software Engineer - EMEA ENG Developer Experience
> 5EC1 95C1 8E08 5BD9 9BBF 9241 3AFA 3A66 024F F68D
> Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno
> 
> 
> _______________________________________________
> Rpm-ecosystem mailing list
> Rpm-ecosystem at lists.rpm.org
> http://lists.rpm.org/mailman/listinfo/rpm-ecosystem
> 

-- 
Radek Holý
Associate Software Engineer
Software Management Team
Red Hat Czech

