From lhrazky at redhat.com  Mon Apr  4 15:55:03 2022
From: lhrazky at redhat.com (Lukáš Hrázký)
Date: Mon, 04 Apr 2022 17:55:03 +0200
Subject: [Rpm-ecosystem] rpm logs to stdout/stderr when callback is set
In-Reply-To:
References:
Message-ID: <6fff2e99463d5645ae4db7425e9b64393cd7751b.camel@redhat.com>

On Thu, 2022-03-31 at 11:48 +0300, Panu Matilainen wrote:
> On 3/25/22 14:47, Lukáš Hrázký wrote:
> >
> > (1) There are instances when the code writes to stdout/stderr directly
> > instead of via logging:
> >
> > One case at this spot:
> > https://github.com/rpm-software-management/rpm/blob/a6913834d395d6544c2ba1578d6ebd594350b602/rpmio/rpmio.c#L1448
>
> Right. That particular case should only really print out anything when
> rpmio_debug is enabled, those stats really are not that interesting in
> this day and age.

Filed an issue: https://github.com/rpm-software-management/rpm/issues/1987

> > Another one I haven't managed to find in the code yet but it logs this:
> > Header SHA256 digest: OK
> > Header SHA1 digest: OK
>
> These come from
> blob/a6913834d395d6544c2ba1578d6ebd594350b602/rpmio/rpmio.c#L1448 - they
> are rpmlog()'ed so it's something else. Maybe they just happen before
> you have the callback set up, eg keyring load would log these for any
> gpg-pubkeys found.

I think you've mistakenly pasted a part of my link above instead of the
location of those messages.

In the meantime I've realized those do go through the correct log calls,
it's just that they are logged as a single multi-line message, hence the
lines appear without the regular log prefix. They look like this:

2022-04-04T11:09:58+0200 [245631] TRACE [rpm] read h# 1228 Header V4 RSA/SHA256 Signature, key ID 9867c58f: OK
Header SHA256 digest: OK
Header SHA1 digest: OK

Where the first line is absolutely indecipherable :) and there are
apparently some digest verifications... Haven't created an issue yet as
I don't know where to point.
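[Editor's note] The unprefixed lines above happen because the whole message arrives as one log record. A consumer-side fix is possible even without changes to rpm: split the message in the callback and emit one record per line. This is only an illustrative sketch in plain Python using the standard `logging` module — `forward_rpm_message` is a hypothetical name, not part of rpm's or dnf's actual API.

```python
import logging

# Stand-in for the logger dnf would route rpm output to.
logger = logging.getLogger("rpm")

def forward_rpm_message(message: str) -> list[str]:
    """Split a multi-line rpm log message and emit one log record per
    line, so each line receives the usual timestamp/level prefix."""
    lines = [line for line in message.rstrip("\n").split("\n") if line]
    for line in lines:
        logger.debug(line)
    return lines

# Simulated multi-line message, as quoted in the mail above.
msg = ("read h# 1228 Header V4 RSA/SHA256 Signature, key ID 9867c58f: OK\n"
       "Header SHA256 digest: OK\n"
       "Header SHA1 digest: OK\n")
forward_rpm_message(msg)
```

With this in place, each of the three lines would carry its own `TRACE [rpm]` prefix instead of only the first.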
> > I think these are easy to fix and I'd like to get them fixed, as they'd
> > interleave with our normal stdout/stderr output (in case the DEBUG log
> > mask is enabled).
>
> Please report as GH ticket(s), email somewhere on a mostly forgotten
> mailing list is too easy to forget :D
>
> > There may be more of these but my cursory grepping of the repo hasn't
> > found anything.
> >
> >
> > (2a) The log callback is stored in a static variable, meaning rpm can't
> > be used from two different places of the same process simultaneously.
> > In dnf we have a context-style Base object and nothing is stopping the
> > user from creating multiple Bases with different configurations, and
> > then potentially installing via rpm into different installroots
> > simultaneously.
> >
> > I don't think there's really a reasonable solution to this, I'm just
> > bringing it up for awareness purposes.
>
> You couldn't install to different installroots anyway because chroot()
> is process global. So while it's (supposedly) okay to have multiple
> transaction sets on different databases, rpmtsRun() will always need to
> be serialized for very concrete reasons. Of course this isn't actually
> enforced in the code, most of librpm simply isn't thread-safe, and even
> the parts that are require treading carefully.
>
> Supporting multiple log contexts wouldn't be too hard as we already have
> all the relevant stuff in one struct. It'd be mostly just adding
> context-aware API variants alongside the current ones, plus some means
> of setting eg the per-ts logger to use, and a lot of churn to update all
> the relevant rpmlog() calls to actually use it. Feel free to file a
> ticket if this is something that seems important to dnf.

Okay, I think it would make things quite a bit easier for us. For now it
seems we can work around it with some ugly locking, but if we could
handle it via the rpm API that'd be much better. But...
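[Editor's note] The "ugly locking" workaround mentioned above can be sketched roughly as follows. This is a hypothetical illustration in plain Python, not real dnf code: `set_rpm_log_callback` and the `(callback, data)` pair stand in for rpm's `rpmlogSetCallback()`, which returns only the previous callback pointer, not its data — the exact flaw discussed under (2b) below.

```python
import threading

_rpm_log_lock = threading.Lock()

# Stand-in for librpm's process-global (callback, callback_data) pair.
_global_cb = (None, None)

def set_rpm_log_callback(cb, data):
    """Install a new callback; return only the previous *callback*.
    The previous callback's data is inaccessible, mirroring the rpm
    API limitation discussed in this thread."""
    global _global_cb
    prev_cb, _prev_data = _global_cb
    _global_cb = (cb, data)
    return prev_cb

def run_transaction_with_logging(cb, data, run):
    """Serialize callback setup around a transaction so log messages
    from concurrent Bases don't get misdirected."""
    with _rpm_log_lock:
        prev_cb = set_rpm_log_callback(cb, data)
        try:
            return run()
        finally:
            # Restoring prev_cb without its original data could break
            # the previous consumer, which is why dnf 5 may choose not
            # to restore the callback at all.
            set_rpm_log_callback(prev_cb, None)
```

A context-aware rpm API (per-transaction-set loggers) would make both the lock and the lossy restore unnecessary.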
:) Although you're the expert here and I know next to nothing: is it
really feasible to introduce an API in RPM to set the context for all
the possible calls to librpm? Seems to me there must be quite a lot of
those.

Haven't created an issue yet, but I can once you confirm it's reasonable
(I don't want to create one if it never gets done anyway due to not
being practically feasible).

> > (2b) Somewhat related, due to the above limitation I'm setting the rpm
> > log callback only just before we start working with an rpm transaction
> > and lock this with a static variable lock so that the log messages
> > don't get misdirected. The API returns the previous log callback so it
> > could be reset after the lock is released, but the API doesn't provide
> > a way to access the data of the log callback, so restoring the callback
> > would just break horribly when there's a different sort of data than
> > what it expects.
> >
> > I don't think we necessarily need to restore the callback to its
> > previous value in dnf 5, so again I'm just bringing this up for
> > awareness. This could be fixed by adding some API.
>
> Yup, this is a flaw shared by more than one callback API in rpm. Please
> file a ticket so it gets tracked.

Filed an issue: https://github.com/rpm-software-management/rpm/issues/1988

Thanks a lot!
Lukas


From pmatilai at redhat.com  Wed Apr 13 13:00:08 2022
From: pmatilai at redhat.com (Panu Matilainen)
Date: Wed, 13 Apr 2022 16:00:08 +0300
Subject: [Rpm-ecosystem] New RPM community venue
In-Reply-To: <9437c6cb-ae92-da62-683c-1bcd032243a0@redhat.com>
References: <9437c6cb-ae92-da62-683c-1bcd032243a0@redhat.com>
Message-ID: <1e10e94a-af98-705a-9ce3-8c047b071a80@redhat.com>

As of today, we're opening up the GitHub Discussions forum as a new
venue for community interaction:

https://github.com/rpm-software-management/rpm/discussions

Why, you ask, when we have all these mailing lists?
The sad fact is that the mailing lists are all but dead, to the point
that even us maintainers miss the rare post on them, which leaves them
even more dead because few people like talking to themselves.

Yet, clearly there is a need for a place to ask questions and discuss
various aspects of rpm and its future, and based on the evidence people
are more inclined to file a ticket to do this than to post on a mailing
list. That, or remain silent. Neither is a particularly good outcome.

We hate the potential vendor lock-in as much as anybody, so these
discussions will always be mirrored to the rpm-maint mailing list along
with the ticket and PR notifications. Other than that, we'll see how it
goes.

On behalf of rpm-team,
	- Panu