So I've obviously started pulling stuff for the merge window, and one
of the things I noticed with Greg doing it for the last few weeks was
that he has this habit (or automation) to send Ack emails when he
pulls.
In fact, I reacted to them not being there when he sent himself his
fake pull messages. Because he didn't then send himself an ack for
having pulled it ;(
And I actually went into this saying "I'll try to do the same".
But after having actually started doing the pulls, I notice how it
doesn't work well with my traditional workflow, and so I haven't been
doing it after all.
In particular, the issue is that after each pull, I do a build test
before the pull is really "final", and while that build test is
ongoing (which takes anything from a few minutes to over an hour when
I'm on the road and using my laptop), I go on and look at the *next*
pull (or one of the other pending ones).
So by the time the build test has finished, the original pull request
is already long gone - archived and done - and I have moved on.
End result: answering the pull request is somewhat inconvenient to my
flow, which is why I haven't done it.
In contrast, this email is written "after the fact", just scripting
"who did I pull for and then push out" by just looking at the git
tree. Which sucks, because it means that I don't actually answer the
original email at all, and thus lose any cc's for other people or
mailing lists. That would literally be done better by simple
automation.
So I've got a few options:
- just don't do it
- acking the pull request before it's validated and finalized.
- starting the reply when doing the pull, leaving the email open in a
separate window, going on to the next pull request, and then when
build tests are done and I'll start the next one, finish off the old
pending email.
and obviously that first option is the easiest one. I'm not sure what
Greg did, and during the later rc's it probably doesn't matter,
because there likely simply aren't any overlapping operations.
Because yes, the second option likely works fine in most cases, but my
pull might not actually be final *if* something goes bad (where bad
might be just "oops, my tests showed a semantic conflict, I'll need to
fix up my merge" to "I'm going to have to look more closely at that
warning" to "uhhuh, I'm going to just undo the pull entirely because
it ended up being broken").
The third option would work reliably, and not have the "oh, my pull is
only tentatively done" issue. It just adds an annoying back-and-forth
switch to my workflow.
So I'm mainly pinging people I've already pulled to see how much
people actually _care_. Yes, the ack is nice, but do people care
enough that I should try to make that workflow change? Traditionally,
you can see that I've pulled from just seeing the end result when it
actually hits the public tree (which is yet another step removed from
the steps above - I do build tests between every pull, but I generally
tend to push out the end result in batches, usually a couple of times
a day).
Comments?
Linus
On Tue, Oct 23, 2018 at 10:41 AM Linus Torvalds
<[email protected]> wrote:
> In particular, the issue is that after each pull, I do a build test
> before the pull is really "final", and while that build test is
> ongoing (which takes anything from a few minutes to over an hour when
> I'm on the road and using my laptop), I go on and look at the *next*
> pull (or one of the other pending ones).
So that is how you work! We have mental models of how the upstream
maintainer's workflow is done, but we always just guessed it was
something like this. Nice to know.
> Comments?
I don't need the ACKs, because there is just always some reason or
nervousness that makes me sit and git pull --ff-only all the time during
the merge window, because I worry that something will break on my
machine or on other people's.
Can't you just tool something that mails automatically after-the-fact?
Greg's "notices" that patch so-and-so was applied are clearly
auto-generated by a script after he has applied and tested a whole
bunch of them; the same should be possible for pull requests,
methinks? Just something you run after a workday, sealing the deal.
Linus Walleij
On Tue, Oct 23, 2018 at 09:41:32AM +0100, Linus Torvalds wrote:
> Because yes, the second option likely works fine in most cases, but my
> pull might not actually be final *if* something goes bad (where bad
> might be just "oops, my tests showed a semantic conflict, I'll need to
> fix up my merge" to "I'm going to have to look more closely at that
> warning" to "uhhuh, I'm going to just undo the pull entirely because
> it ended up being broken").
Is that a big problem? I mean, those who need an ACK probably just want
to be sure their PR was not lost between them and you. It's not a guarantee
that the code will be kept till the release anyway, and I tend to think
that changing your mind after attempting a build is no different from
changing your mind 3 days later. So when this happens, you're possibly
expected to simply notify the author later saying "sorry, I changed my
mind and finally dropped your code for this or that reason". That
should be enough to cover the vast majority of use cases, no?
Just my two cents,
Willy
On Tue, Oct 23, 2018 at 9:53 AM Linus Walleij <[email protected]> wrote:
>
> Can't you just tool something that mails automatically after-the-fact?
So a certain amount of simple/stupid automation would be possible.
That's how the participants list in this email was generated, but the
script I used was actually a pretty much garbage one-liner that just
happens to work for most cases.
It just did my usual "mergelog" (which is a bit like "git shortlog";
it's a script to get the summary of my merges instead of the general
git log) and then used the result to look up email addresses by just
matching committers.
But it's broken to the point of almost being useless for a couple of reasons:
- my mergelog names don't necessarily match any name in the git history.
For example, Greg goes by "Greg KH" when I merge from him, because
I'm lazy and feel like I don't want to mis-type his name, which I've
done too many times. But in the actual git history, he goes by the
full "Greg Kroah-Hartman", so my stupid script would have messed him
up.
At the other end of the spectrum, people whose names use complex
characters have them copied-and-pasted from their email or from the
signature in their tag, and sometimes those don't match either.
- some people use one email for "official" purposes (ie company email
etc) in the git history, but actually tend to *use* another email
(because sometimes the company email is slow and/or broken).
- it wouldn't get the usual mailing list cc's etc, and those might be
the most important ones. It is how I saw Greg's replies, after all.
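To make the fragility concrete, a toy version of that kind of name-scraping pipeline might look like this (the sample line, pattern, and follow-up lookup are all illustrative, not the actual script):

```shell
# Illustrative only: scrape "from <Name>" out of a merge summary the
# way a naive mergelog-to-email script might. Any name that differs
# between the merge message and the commit history breaks the lookup.
printf 'Pull networking updates from David S. Miller\n' |
    sed -n 's/^Pull .* from \(.*\)$/\1/p'
# In a real tree, the follow-up lookup would then be something like:
#   git log --all -1 --committer="$name" --pretty='%cN <%cE>'
```

The sed step prints "David S. Miller"; the lookup step is exactly where a mergelog alias like "Greg KH" fails to match the history's "Greg Kroah-Hartman".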
So I feel that the automation model is just not good. The reply should
go to the actual pull request, not to the git history. People who want
just _that_ could already automate the git history thing without me
even doing anything at all, either scripting it themselves or by using
some filtering on the kernel commit mailing list.
So I happened to use the automation model for this email thread, but I
think it's actually the worst of all worlds.
Linus
* Linus Torvalds <[email protected]> wrote:
> So I've got a few options:
>
> - just don't do it
>
> - acking the pull request before it's validated and finalized.
>
> - starting the reply when doing the pull, leaving the email open in a
> separate window, going on to the next pull request, and then when
> build tests are done and I'll start the next one, finish off the old
> pending email.
>
> and obviously that first option is the easiest one. I'm not sure what
> Greg did, and during the later rc's it probably doesn't matter,
> because there likely simply aren't any overlapping operations.
>
> Because yes, the second option likely works fine in most cases, but my
> pull might not actually be final *if* something goes bad (where bad
> might be just "oops, my tests showed a semantic conflict, I'll need to
> fix up my merge" to "I'm going to have to look more closely at that
> warning" to "uhhuh, I'm going to just undo the pull entirely because
> it ended up being broken").
>
> The third option would work reliably, and not have the "oh, my pull is
> only tentatively done" issue. It just adds an annoying back-and-forth
> switch to my workflow.
There's a fourth option I'm using: I use 'zero inbox' mail reading,
last-to-first, and with that I can 'delay' a reply to a pull request or
patch simply by marking the mail unread. Then when I push out tested
trees and patches I go and process the tail of the mbox, a couple of
entries typically. (For patches I don't even have to do anything because
the notification is automatic and I mark the patch read when I see the
tip-bot notification myself.)
It's still a separate workflow step but easier to manage than postponed
emails or separate email windows, which are inevitably going to get lost
in browser mishaps every couple of weeks, and which are not high-profile
enough in the primary workflow either.
Might not be a practical method with the amount of mail you are getting
though ...
> So I'm mainly pinging people I've already pulled to see how much people
> actually _care_. Yes, the ack is nice, but do people care enough that I
> should try to make that workflow change? Traditionally, you can see
> that I've pulled from just seeing the end result when it actually hits
> the public tree (which is yet another step removed from the steps above
> - I do build tests between every pull, but I generally tend to push out
> the end result in batches, usually a couple of times a day).
>
> Comments?
No strong feelings here. Occasionally (1 out of 40 pull requests
perhaps), if you don't do a same-day pull, or for some reason you are
delaying the pull request (you need time to think about it, or it's just
randomly sorted differently from other pull requests, etc.), then I just
don't know *why* you didn't pull: are you still thinking about it, is it
a random delay because some other tree is causing trouble and you are
bisecting, or did it get lost because my email got marked as spam?
I'll wait 2-3 days in such cases because when there's something genuinely
wrong with a pull request I definitely don't want to draw your attention
to it but figure it out myself and offer a v2 pull request ... ;-)
If you started sending Acks explicitly there would be more certainty in
these cases.
But it's not a big factor, I'd say the efficiency of your workflow (which
is a single thread) should be the primary concern here.
Thanks,
Ingo
Hi Linus,
On Tue, 23 Oct 2018 09:41:32 +0100
Linus Torvalds <[email protected]> wrote:
> So I'm mainly pinging people I've already pulled to see how much
> people actually _care_. Yes, the ack is nice, but do people care
> enough that I should try to make that workflow change? Traditionally,
> you can see that I've pulled from just seeing the end result when it
> actually hits the public tree (which is yet another step removed from
> the steps above - I do build tests between every pull, but I generally
> tend to push out the end result in batches, usually a couple of times
> a day).
>
> Comments?
I do like to receive notifications when a PR is merged (or when a patch
or patchset is applied). Note that I don't care if this notification is
sent in-reply to the original email or in a separate email. I know Mark
(Brown) has it automated in some way, not sure if it's through patchwork
or if he's using a custom tool, and I'm also not sure it works for pull
requests.
Anyway, it's just a nice thing to have, and I can do without it if it's
too complicated to automate.
Regards,
Boris
On Tue, Oct 23, 2018 at 09:41:32AM +0100, Linus Torvalds wrote:
> In contrast, this email is written "after the fact", just scripting
> "who did I pull for and then push out" by just looking at the git
> tree. Which sucks, because it means that I don't actually answer the
> original email at all, and thus lose any cc's for other people or
> mailing lists. That would literally be done better by simple
> automation.
> So I've got a few options:
> - just don't do it
> - acking the pull request before it's validated and finalized.
> - starting the reply when doing the pull, leaving the email open in a
> separate window, going on to the next pull request, and then when
> build tests are done and I'll start the next one, finish off the old
> pending email.
> and obviously that first option is the easiest one. I'm not sure what
> Greg did, and during the later rc's it probably doesn't matter,
> because there likely simply aren't any overlapping operations.
I have a script that sends people rather lengthy "applied" e-mails when
I push things out with a lot of process blurb that new contributors
might need. It tries to use patchwork to get the message ID and CC
list, though it will just fall back to scraping from git. In my case
it's mainly there to help new contributors know what's going on but a
lot of other people have said they find them useful.
> So I'm mainly pinging people I've already pulled to see how much
> people actually _care_. Yes, the ack is nice, but do people care
> enough that I should try to make that workflow change? Traditionally,
> you can see that I've pulled from just seeing the end result when it
> actually hits the public tree (which is yet another step removed from
> the steps above - I do build tests between every pull, but I generally
> tend to push out the end result in batches, usually a couple of times
> a day).
It doesn't urgently bother me personally; honestly, I was a bit alarmed
the first time Greg sent me an ack, since with your usual workflow any
mail means that there's a problem. Whichever option you pick is fine,
so long as it's consistent.
On Tue, Oct 23, 2018 at 09:41:32AM +0100, Linus Torvalds wrote:
> In particular, the issue is that after each pull, I do a build test
> before the pull is really "final", and while that build test is
> ongoing (which takes anything from a few minutes to over an hour when
> I'm on the road and using my laptop), I go on and look at the *next*
> pull (or one of the other pending ones).
>
> So by the time the build test has finished, the original pull request
> is already long gone - archived and done - and I have moved on.
>
> End result: answering the pull request is somewhat inconvenient to my
> flow, which is why I haven't done it.
I had this same issue, as I had full builds running and had to wait for
the results. But I had a much smaller number of pull requests, so I just
dumped them all into one folder and then did the responses when the
tests came back.
So I had the same issue as you, but you have many more requests to deal
with, sorry.
greg k-h
On Tue, Oct 23, 2018 at 10:10:47AM +0100, Linus Torvalds wrote:
> So I feel that the automation model is just not good. The reply should
> go to the actual pull request, not to the git history. People who want
> just _that_ could already automate the git history thing without me
> even doing anything at all, either scripting it themselves or by using
> some filtering on the kernel commit mailing list..
Can you tag the merge commit with the message-id of the pull request?
Then automation machinery could reply to the pull request with the
proper CC list obtained from the archive.
--
Kirill A. Shutemov
On Tue, Oct 23, 2018 at 12:35:22PM +0300, Kirill A. Shutemov wrote:
> On Tue, Oct 23, 2018 at 10:10:47AM +0100, Linus Torvalds wrote:
> > So I feel that the automation model is just not good. The reply should
> > go to the actual pull request, not to the git history. People who want
> > just _that_ could already automate the git history thing without me
> > even doing anything at all, either scripting it themselves or by using
> > some filtering on the kernel commit mailing list..
> Can you tag merge commit with message-id of the pull request?
> Automation machinery can reply to the pull request with proper CC list
> obtained from the archive?
If you're doing that, you could even just put all the info you'd get
from the e-mail, like the CC list, into the tag (or a git note or
whatever); no need to bounce to a list archive.
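A minimal sketch of that idea using a git note on the merge commit (the repo, identity, and Message-ID below are made up for illustration; this is not existing kernel tooling):

```shell
# Hypothetical: attach the pull request's Message-ID to the merge
# commit as a git note, so a reply bot can later find the original
# thread (and its CC list) without scraping a list archive.
git init -q lmk-notes-demo && cd lmk-notes-demo
git -c user.name=A -c user.email=a@example.org \
    commit -q --allow-empty -m 'Merge tag "demo-pull" of git://example'
git -c user.name=A -c user.email=a@example.org \
    notes --ref=pullreq add -m 'Message-ID: <20181023-demo@example.org>' HEAD
# The bot side would read it back with:
git notes --ref=pullreq show HEAD
```

Notes live out-of-band in their own ref, so the merge hash does not change; the trade-off is that notes refs have to be pushed and fetched explicitly.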
On Tue, 23 Oct 2018, Linus Torvalds wrote:
> Comments?
I'm used to watching the git-commits-head mailing list to see what's being
pulled and don't need anything further as an ack.
--
James Morris
<[email protected]>
On 10/23/18 2:41 AM, Linus Torvalds wrote:
> So I've obviously started pulling stuff for the merge window, and one
> of the things I noticed with Greg doing it for the last few weeks was
> that he has this habit (or automation) to send Ack emails when he
> pulls.
>
> In fact, I reacted to them not being there when he sent himself his
> fake pull messages. Because he didn't then send himself an ack for
> having pulled it ;(
>
> And I actually went into this saying "I'll try to do the same".
>
> But after having actually started doing the pulls, I notice how it
> doesn't work well with my traditional workflow, and so I haven't been
> doing it after all.
>
> In particular, the issue is that after each pull, I do a build test
> before the pull is really "final", and while that build test is
> ongoing (which takes anything from a few minutes to over an hour when
> I'm on the road and using my laptop), I go on and look at the *next*
> pull (or one of the other pending ones).
>
> So by the time the build test has finished, the original pull request
> is already long gone - archived and done - and I have moved on.
>
> End result: answering the pull request is somewhat inconvenient to my
> flow, which is why I haven't done it.
>
> In contrast, this email is written "after the fact", just scripting
> "who did I pull for and then push out" by just looking at the git
> tree. Which sucks, because it means that I don't actually answer the
> original email at all, and thus lose any cc's for other people or
> mailing lists. That would literally be done better by simple
> automation.
>
> So I've got a few options:
>
> - just don't do it
>
> - acking the pull request before it's validated and finalized.
>
> - starting the reply when doing the pull, leaving the email open in a
> separate window, going on to the next pull request, and then when
> build tests are done and I'll start the next one, finish off the old
> pending email.
>
> and obviously that first option is the easiest one. I'm not sure what
> Greg did, and during the later rc's it probably doesn't matter,
> because there likely simply aren't any overlapping operations.
>
> Because yes, the second option likely works fine in most cases, but my
> pull might not actually be final *if* something goes bad (where bad
> might be just "oops, my tests showed a semantic conflict, I'll need to
> fix up my merge" to "I'm going to have to look more closely at that
> warning" to "uhhuh, I'm going to just undo the pull entirely because
> it ended up being broken").
>
> The third option would work reliably, and not have the "oh, my pull is
> only tentatively done" issue. It just adds an annoying back-and-forth
> switch to my workflow.
>
> So I'm mainly pinging people I've already pulled to see how much
> people actually _care_. Yes, the ack is nice, but do people care
> enough that I should try to make that workflow change? Traditionally,
> you can see that I've pulled from just seeing the end result when it
> actually hits the public tree (which is yet another step removed from
> the steps above - I do build tests between every pull, but I generally
> tend to push out the end result in batches, usually a couple of times
> a day).
I like getting an ack when something has been seen; I don't necessarily
need one for when it's also finalized. I'm just going to assume it is,
unless I hear otherwise. I always reply to people's pull requests, even
if it's just to say "Pulled, thanks". What happens when you don't send
one is that:
1) I regularly check the git repo to see if it's actually in.
2) If I do get a reply to one, I cringe. Why? Because it's usually
yelling about something wrong. This means I also more regularly
check email to see if there's yelling queued up.
I'd say do whatever works the best for your workflow, but one of
option 2 or 3 would be preferable. #2 seems like it would fit just fine
with your existing workflow.
--
Jens Axboe
On Tue, 23 Oct 2018 11:02:45 +0200,
Willy Tarreau wrote:
>
> On Tue, Oct 23, 2018 at 09:41:32AM +0100, Linus Torvalds wrote:
> > Because yes, the second option likely works fine in most cases, but my
> > pull might not actually be final *if* something goes bad (where bad
> > might be just "oops, my tests showed a semantic conflict, I'll need to
> > fix up my merge" to "I'm going to have to look more closely at that
> > warning" to "uhhuh, I'm going to just undo the pull entirely because
> > it ended up being broken").
>
> Is that a big problem ? I mean probably those who need an ACK just want
> to be sure their PR was not lost between them and you. It's not a guarantee
> that the code will be kept till the release anyway, and I tend to think
> that changing your mind after attempting a build is not different than
> changing your mind 3 days later. So when this happens, you're possibly
> expected to simply notify the author later saying "sorry, I changed my
> mind and finally I dropped your code for this or that reason". That
> should be enough to cover the vast majority of use cases, no ?
Agreed, the ACK mail doesn't necessarily mean that everything is right;
it just acknowledges that the pull request is being processed. The
e-mail communication can go wrong pretty easily (it happened once or
twice for my past PRs), so a simple ACK would relieve me on that point
-- as Greg's ACK did indeed.
thanks,
Takashi
On Tue, Oct 23, 2018 at 10:35 AM Kirill A. Shutemov
<[email protected]> wrote:
>
> Can you tag merge commit with message-id of the pull request?
> Automation machinery can reply to the pull request with proper CC list
> obtained from the archive?
If it's a "proper" pull request (ie done by git request-pull), then
the magic marker would be that it has that
    for you to fetch changes up to %H:
line, where %H is the hash of the tip of the tree that is requested to be pulled.
Then automation could literally just check "is that commit in Linus'
public tree", and when that happens, generate an automatic
notification that the pull request in question has been merged.
So this could all be automated for people who really want to automate
it. I'm not sure I want to do _that_ kind of automation, though. That
sounds more like "maybe something like that would make sense as an
extension of a patchwork-like tool".
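The check described above could be sketched roughly like this (the hash, inline request text, and remote name are illustrative; real tooling such as a patchwork extension would be more careful):

```shell
# Illustrative: extract the tip hash from git-request-pull output,
# then (against a real clone) test whether it has hit the public tree.
req='  for you to fetch changes up to 8c2b418c3f95a488f5226870eee68574d323f0f8:'
tip=$(printf '%s\n' "$req" |
    sed -n 's/.*fetch changes up to \([0-9a-f]\{40\}\).*/\1/p')
echo "$tip"
# In a clone of the public tree, the "has it landed" check would be:
#   git fetch -q origin &&
#   git merge-base --is-ancestor "$tip" origin/master &&
#   echo "pull request merged - notify the requester"
```

git merge-base --is-ancestor exits 0 exactly when the requested tip is reachable from the public branch, which is the "is that commit in Linus' public tree" test.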
I think I'll just try the "ack when starting the pull" model and see
how that works. Maybe I was overthinking it.
And if it turns out that it would be better to ack after everything
has passed, I could easily just do an email filter for "messages that
are to me, but I have archived and not replied to, and that have 'git
pull' in them".
I use email filters for pinpointing the pulls to begin with, I could
just use email filters to pinpoint the pull requests that I have
already handled.
Linus
On Tue, Oct 23, 2018 at 10:03 AM Willy Tarreau <[email protected]> wrote:
>
> On Tue, Oct 23, 2018 at 09:41:32AM +0100, Linus Torvalds wrote:
> > Because yes, the second option likely works fine in most cases, but my
> > pull might not actually be final *if* something goes bad (where bad
> > might be just "oops, my tests showed a semantic conflict, I'll need to
> > fix up my merge" to "I'm going to have to look more closely at that
> > warning" to "uhhuh, I'm going to just undo the pull entirely because
> > it ended up being broken").
>
> Is that a big problem ? I mean probably those who need an ACK just want
> to be sure their PR was not lost between them and you.
That second case is what I personally suspect is the best balance
between convenience and "works most of the time".
And the "tentative pull" is almost always the final one. In fact, my
previous flow was to only send out emails in the (rare) situation
where it wasn't - letting people know that I _tried_ to pull, but
there was some issue that resulted in me unpulling after the fact.
And that email wouldn't go away, so if I first send a "Pulled" ack
message, and then something bad happens and I unpull it, I would send
a second email anyway saying "oh, oops, not pulled after all".
I'm actually slightly hoping that people will just say they don't even
care, but I suspect people _did_ like getting the ack emails.
Linus
On Tue, Oct 23, 2018 at 11:17:38AM +0200, Boris Brezillon wrote:
> sent in-reply to the original email or in a separate email. I know Mark
> (Brown) has it automated in some way, not sure if it's through patchwork
> or if he's using a custom tool, and I'm also not sure it works for pull
> requests.
My tool works only for commits; Greg's appears to have a similar
limitation to mine as well.
On 23 October 2018 at 10:41, Linus Torvalds
<[email protected]> wrote:
> So I've obviously started pulling stuff for the merge window, and one
> of the things I noticed with Greg doing it for the last few weeks was
> that he has this habit (or automation) to send Ack emails when he
> pulls.
>
> In fact, I reacted to them not being there when he sent himself his
> fake pull messages. Because he didn't then send himself an ack for
> having pulled it ;(
>
> And I actually went into this saying "I'll try to do the same".
>
> But after having actually started doing the pulls, I notice how it
> doesn't work well with my traditional workflow, and so I haven't been
> doing it after all.
>
> In particular, the issue is that after each pull, I do a build test
> before the pull is really "final", and while that build test is
> ongoing (which takes anything from a few minutes to over an hour when
> I'm on the road and using my laptop), I go on and look at the *next*
> pull (or one of the other pending ones).
>
> So by the time the build test has finished, the original pull request
> is already long gone - archived and done - and I have moved on.
>
> End result: answering the pull request is somewhat inconvenient to my
> flow, which is why I haven't done it.
>
> In contrast, this email is written "after the fact", just scripting
> "who did I pull for and then push out" by just looking at the git
> tree. Which sucks, because it means that I don't actually answer the
> original email at all, and thus lose any cc's for other people or
> mailing lists. That would literally be done better by simple
> automation.
>
> So I've got a few options:
>
> - just don't do it
>
> - acking the pull request before it's validated and finalized.
>
> - starting the reply when doing the pull, leaving the email open in a
> separate window, going on to the next pull request, and then when
> build tests are done and I'll start the next one, finish off the old
> pending email.
>
> and obviously that first option is the easiest one. I'm not sure what
> Greg did, and during the later rc's it probably doesn't matter,
> because there likely simply aren't any overlapping operations.
>
> Because yes, the second option likely works fine in most cases, but my
> pull might not actually be final *if* something goes bad (where bad
> might be just "oops, my tests showed a semantic conflict, I'll need to
> fix up my merge" to "I'm going to have to look more closely at that
> warning" to "uhhuh, I'm going to just undo the pull entirely because
> it ended up being broken").
>
> The third option would work reliably, and not have the "oh, my pull is
> only tentatively done" issue. It just adds an annoying back-and-forth
> switch to my workflow.
>
> So I'm mainly pinging people I've already pulled to see how much
> people actually _care_. Yes, the ack is nice, but do people care
> enough that I should try to make that workflow change? Traditionally,
> you can see that I've pulled from just seeing the end result when it
> actually hits the public tree (which is yet another step removed from
> the steps above - I do build tests between every pull, but I generally
> tend to push out the end result in batches, usually a couple of times
> a day).
>
> Comments?
Welcome back!
I have no strong opinions in regard to the acks.
Your current approach, with no ack at all, just means that I have to
do "git remote update" a few times, which I probably would have done
anyway. So, to me, feel free to pick whatever option makes life
easiest for you.
Kind regards
Uffe
On Tue, Oct 23, 2018 at 9:42 AM Linus Torvalds
<[email protected]> wrote:
>
> So I've obviously started pulling stuff for the merge window, and one
> of the things I noticed with Greg doing it for the last few weeks was
> that he has this habit (or automation) to send Ack emails when he
> pulls.
>
> In fact, I reacted to them not being there when he sent himself his
> fake pull messages. Because he didn't then send himself an ack for
> having pulled it ;(
>
> And I actually went into this saying "I'll try to do the same".
>
> But after having actually started doing the pulls, I notice how it
> doesn't work well with my traditional workflow, and so I haven't been
> doing it after all.
>
> In particular, the issue is that after each pull, I do a build test
> before the pull is really "final", and while that build test is
> ongoing (which takes anything from a few minutes to over an hour when
> I'm on the road and using my laptop), I go on and look at the *next*
> pull (or one of the other pending ones).
>
> So by the time the build test has finished, the original pull request
> is already long gone - archived and done - and I have moved on.
>
> End result: answering the pull request is somewhat inconvenient to my
> flow, which is why I haven't done it.
>
> In contrast, this email is written "after the fact", just scripting
> "who did I pull for and then push out" by just looking at the git
> tree. Which sucks, because it means that I don't actually answer the
> original email at all, and thus lose any cc's for other people or
> mailing lists. That would literally be done better by simple
> automation.
>
> So I've got a few options:
>
> - just don't do it
>
> - acking the pull request before it's validated and finalized.
>
> - starting the reply when doing the pull, leaving the email open in a
> separate window, going on to the next pull request, and then when
> build tests are done and I'll start the next one, finish off the old
> pending email.
>
> and obviously that first option is the easiest one. I'm not sure what
> Greg did, and during the later rc's it probably doesn't matter,
> because there likely simply aren't any overlapping operations.
It's funny, because the first time I saw a reply from Greg on a pull
request, I thought I had done something wrong -- I've been so used to
only getting replies when there's something not right with it.
Like others, I'm used to polling for material showing up, and either
way is fine with me.
For the pull requests we handle, we normally reply (since it makes it
easier to see which pull requests have been handled when you share
them). In my case, I write the reply immediately, but I use msmtp-queue
and mutt to do it, and don't send the queue until I'm done with the
current batch of pull requests, so I can sometimes go back and revoke a
message before it has gone out. It doesn't work for web-gmail use cases.
> Because yes, the second option likely works fine in most cases, but my
> pull might not actually be final *if* something goes bad (where bad
> might be just "oops, my tests showed a semantic conflict, I'll need to
> fix up my merge" to "I'm going to have to look more closely at that
> warning" to "uhhuh, I'm going to just undo the pull entirely because
> it ended up being broken").
1 + the last follow-up would be fine with me. I doubt anyone will just
delete their material within minutes of getting the initial reply.
-Olof
On Tue, Oct 23, 2018 at 10:46:06AM +0100, Linus Torvalds wrote:
>If it's a "proper" pull request (ie done by git request-pull), then
>the magic marker would be that it has that
>
> for you to fetch changes up to %H:
>
>line where %H is the hash of the tip of the tree that is requested to be pulled.
>
>Then automation could literally just check "is that commit in Linus'
>public tree", and when that happens, generate an automatic
>notification that the pull request in question has been merged.
I can probably do something like that at kernel.org. How about something
more generic -- e.g. a simple tool that asks a remote web service to
notify you when a commit-id is seen in one of the kernel.org repos?
E.g.:
git lmk for-linus mainline
this does:
- find out the commit-id that "for-linus" points at
- send a REST request to https://foo.kernel.org/lmk:
{
"tree": "mainline",
"commit": "123abc...abc555",
"notify": "(output of $(git config user.email))"
}
We already run a bunch of periodic jobs on repo updates and can run an
additional check-and-fire-an-email automation job.
Would that be a useful alternative? If yes, what would be your preferred
workflow for such a tool instead of "git lmk [commit] [tree-moniker]"?
-K
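A client along those lines is only a few lines of scripting; this is a rough sketch of the shape, not a real tool -- the endpoint URL, payload fields, and helper names are all taken from, or invented around, the example above:

```python
import json
import subprocess
import urllib.request

def resolve_ref(ref):
    # Ask git for the full commit id the ref currently points at.
    return subprocess.check_output(
        ["git", "rev-parse", ref], text=True
    ).strip()

def build_lmk_payload(tree, commit, notify_email):
    # Mirrors the JSON shape from the example above.
    return {"tree": tree, "commit": commit, "notify": notify_email}

def lmk(ref, tree, endpoint="https://foo.kernel.org/lmk"):
    # "git lmk for-linus mainline" would boil down to roughly this.
    email = subprocess.check_output(
        ["git", "config", "user.email"], text=True
    ).strip()
    payload = build_lmk_payload(tree, resolve_ref(ref), email)
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Git runs any `git-lmk` executable found on $PATH when you type `git lmk`, so no integration beyond naming the script is needed.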
On 10/23/2018 02:13 PM, Ulf Hansson wrote:
> On 23 October 2018 at 10:41, Linus Torvalds
> <[email protected]> wrote:
>> So I've obviously started pulling stuff for the merge window, and one
>> of the things I noticed with Greg doing it for the last few weeks was
>> that he has this habit (or automation) to send Ack emails when he
>> pulls.
>>
>> In fact, I reacted to them not being there when he sent himself his
>> fake pull messages. Because he didn't then send himself an ack for
>> having pulled it ;(
>>
>> And I actually went into this saying "I'll try to do the same".
>>
>> But after having actually started doing the pulls, I notice how it
>> doesn't work well with my traditional workflow, and so I haven't been
>> doing it after all.
>>
>> In particular, the issue is that after each pull, I do a build test
>> before the pull is really "final", and while that build test is
>> ongoing (which takes anything from a few minutes to over an hour when
>> I'm on the road and using my laptop), I go on and look at the *next*
>> pull (or one of the other pending ones).
>>
>> So by the time the build test has finished, the original pull request
>> is already long gone - archived and done - and I have moved on.
>>
>> End result: answering the pull request is somewhat inconvenient to my
>> flow, which is why I haven't done it.
>>
>> In contrast, this email is written "after the fact", just scripting
>> "who did I pull for and then push out" by just looking at the git
>> tree. Which sucks, because it means that I don't actually answer the
>> original email at all, and thus lose any cc's for other people or
>> mailing lists. That would literally be done better by simple
>> automation.
>>
>> So I've got a few options:
>>
>> - just don't do it
>>
>> - acking the pull request before it's validated and finalized.
>>
>> - starting the reply when doing the pull, leaving the email open in a
>> separate window, going on to the next pull request, and then when
>> build tests are done and I'll start the next one, finish off the old
>> pending email.
>>
>> and obviously that first option is the easiest one. I'm not sure what
>> Greg did, and during the later rc's it probably doesn't matter,
>> because there likely simply aren't any overlapping operations.
>>
>> Because yes, the second option likely works fine in most cases, but my
>> pull might not actually be final *if* something goes bad (where bad
>> might be just "oops, my tests showed a semantic conflict, I'll need to
>> fix up my merge" to "I'm going to have to look more closely at that
>> warning" to "uhhuh, I'm going to just undo the pull entirely because
>> it ended up being broken").
>>
>> The third option would work reliably, and not have the "oh, my pull is
>> only tentatively done" issue. It just adds an annoying back-and-forth
>> switch to my workflow.
>>
>> So I'm mainly pinging people I've already pulled to see how much
>> people actually _care_. Yes, the ack is nice, but do people care
>> enough that I should try to make that workflow change? Traditionally,
>> you can see that I've pulled from just seeing the end result when it
>> actually hits the public tree (which is yet another step removed from
>> the steps above - I do build tests between every pull, but I generally
>> tend to push out the end result in batches, usually a couple of times
>> a day).
>>
>> Comments?
>
> Welcome back!
>
> I have no strong opinions, in regards to the acks.
>
> Your current approach, with no ack at all, just means that I have to
> do "git remote update" a few times, which I probably would have done
> anyways. So, to me, feel free to pick whatever option that makes the
> life easiest for you.
Same for me, I do the update anyway to see if and how my pull request
has been merged.
--
Best regards,
Jacek Anaszewski
On Tue, Oct 23, 2018 at 1:41 AM, Linus Torvalds
<[email protected]> wrote:
> So I've got a few options:
>
> - just don't do it
As with other folks, this is what we're used to, but it does cause a
lot of "polling" your tree to see what's landed. (And your "Pulled"
email to pstore today scared the crap out of me briefly -- it made me
go look for this thread...)
I enjoyed getting Greg's "Pulled" emails for post-rc4, since it closes
the loop. I've always hugely preferred getting "Applied" etc emails,
and I try to make sure I always send them too.
> - acking the pull request before it's validated and finalized.
While this can work, I would find it personally only a little useful
since it doesn't actually contain the information I (and any folks
contributing to the pulled patches) need: has it landed? When I send a
pull request for security hardening things, I'm mentally wearing my
seasoned asbestos suit until I see the PR has landed. (Other trees of
mine like pstore don't tend to trigger rants, so those are likely just
fine for this notification method.)
> - starting the reply when doing the pull, leaving the email open in a
> separate window, going on to the next pull request, and then when
> build tests are done and I'll start the next one, finish off the old
> pending email.
This sounds like an annoying fragmentation of your workflow. I thought
Mark and Kirill's suggestion to stash the PR Message-Id in your merge
commit would be pretty easy to automate, though. (And may just be a
good bit of record-keeping anyway...)
On the balance, I think since most things you start to pull are, in
fact, pulled, the "send at start" method covers most cases and does
let people know when you've gotten to their PR. And I can spend less
time wearing my preparatory asbestos -- just from "Pulled" email until
I see it land. ;)
--
Kees Cook
I'm back home, slightly jet-lagged, but _oh_ so relieved to not be
doing the merge window on a laptop any more.
I've been continuing to just manually ack the pull requests, but I've
almost forgotten a few times (and maybe I _did_ forget one or two and
didn't catch it? Who knows?).
So while maybe just continuing to do this means that it becomes second
nature, I'm starting to think that mailing list automation really
would be a good idea:
On Tue, Oct 23, 2018 at 1:04 PM Konstantin Ryabitsev
<[email protected]> wrote:
>
> On Tue, Oct 23, 2018 at 10:46:06AM +0100, Linus Torvalds wrote:
> >If it's a "proper" pull request (ie done by git request-pull), then
> >the magic marker would be that it has that
> >
> > for you to fetch changes up to %H:
> >
> >line where %H is the hash of the tip of the tree that is requested to be pulled.
> >
> >Then automation could literally just check "is that commit in Linus'
> >public tree", and when that happens, generate an automatic
> >notification that the pull request in question has been merged.
>
> I can probably do something like that at kernel.org. How about something
> more generic -- e.g. a simple tool that asks a remote web service to
> notify you when a commit-id is seen in one of the kernel.org repos?
So I think it might be good to have some generic model for "give me a
trigger when XYZ hits git tree ABC" that people could just do in
general, *but* I think the "scan mailing lists for regular pull
requests" would actually be nicer.
Maybe it would be just a special-case wrapper around a more generic
thing, but this:
> - send a REST request to https://foo.kernel.org/lmk:
>
> {
> "tree": "mainline",
> "commit": "123abc...abc555",
> "notify": "(output of $(git config user.email))"
> }
doesn't really sound all that nice for the "I sent a git pull request,
and want to be notified".
It would be much nicer if the "notification" really did the right
thing, and created an actual email follow-up, with the correct To/Cc
and subject lines, but also the proper "References" line so that it
actually gets threaded properly too.
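A sketch of generating such a follow-up with Python's standard email library (the bot address and wording are placeholders; `orig` is the parsed pull-request mail):

```python
from email.message import EmailMessage
from email.parser import Parser  # e.g. orig = Parser().parsestr(raw_mail)

def build_merged_notice(orig, merge_commit):
    # Reply to everyone on the original pull request, with the headers
    # that make mail clients nest this notice under it.
    reply = EmailMessage()
    reply["From"] = "pr-bot@example.org"  # hypothetical bot address
    reply["To"] = orig["From"]
    if orig["Cc"]:
        reply["Cc"] = orig["Cc"]
    reply["Subject"] = "Re: " + (orig["Subject"] or "")
    # In-Reply-To names the mail being answered; References carries the
    # whole chain, which is what drives the proper threading.
    reply["In-Reply-To"] = orig["Message-Id"]
    refs = (orig["References"] or "").split() + [orig["Message-Id"]]
    reply["References"] = " ".join(refs)
    reply.set_content(f"Merged as {merge_commit}, thanks.")
    return reply
```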
That implies that it really should be integrated into the mailing list itself.
But I don't know how flexible the whole lkml archive bot is for things
like this. But I assume you have _some_ hook into new messages coming
in for lore.kernel.org?
> Would that be a useful alternative? If yes, what would be your preferred
> workflow for such tool instead of "git lmk [commit] [tree-moniker]"?
I really do suspect that "I sent out a pull request, I'd like to be
automatically notified when it gets upstream" would be the primary
thing.
And by "upstreamed" it isn't necessarily just my tree, of course.
Are there other situations where you might want to track something
_outside_ of a pull request? Maybe. I can't really think of a lot of
them, though. Patches etc don't have commit IDs to track, but it
*might* be interesting to see similar automation just based on the git
patch-ID. But that sounds more like a patchwork issue than something
like "track pull requests".
But this might be one of those "maybe a quick prototype gives people
ideas". Sometimes people _really_ hate automation, but it sounds to me
like it would be a lovely thing to have.
Linus
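The marker line Linus refers to is mechanical enough that extracting the requested tip is a one-regex job; a minimal sketch:

```python
import re

# `git request-pull` output contains a line of the form:
#   for you to fetch changes up to <sha>:
FETCH_RE = re.compile(r"for you to fetch changes up to ([0-9a-f]{7,40}):")

def requested_tip(mail_body):
    """Return the tip commit a pull-request mail asks to be pulled, or None."""
    m = FETCH_RE.search(mail_body)
    return m.group(1) if m else None
```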
On Thu, Oct 25, 2018 at 9:14 AM Linus Torvalds
<[email protected]> wrote:
>
> I'm back home, slightly jet-lagged, but _oh_ so relieved to not be
> doing the merge window on a laptop any more.
>
> I've been continuing to just manually ack the pull requests, but I've
> almost forgotten a few times (and maybe I _did_ forget one or two and
> didn't catch it? Who knows?).
>
> So while maybe just continuing to do this means that it becomes second
> nature, I'm starting to think that mailing list automation really
> would be a good idea:
>
> On Tue, Oct 23, 2018 at 1:04 PM Konstantin Ryabitsev
> <[email protected]> wrote:
> >
> > On Tue, Oct 23, 2018 at 10:46:06AM +0100, Linus Torvalds wrote:
> > >If it's a "proper" pull request (ie done by git request-pull), then
> > >the magic marker would be that it has that
> > >
> > > for you to fetch changes up to %H:
> > >
> > >line where %H is the hash of the tip of the tree that is requested to be pulled.
> > >
> > >Then automation could literally just check "is that commit in Linus'
> > >public tree", and when that happens, generate an automatic
> > >notification that the pull request in question has been merged.
> >
> > I can probably do something like that at kernel.org. How about something
> > more generic -- e.g. a simple tool that asks a remote web service to
> > notify you when a commit-id is seen in one of the kernel.org repos?
>
> So I think it might be good to have some generic model for "give me a
> trigger when XYZ hits git tree ABC" that people could just do in
> general, *but* I think the "scan mailing lists for regular pull
> requests" would actually be nicer.
>
> Maybe it would be just a special-case wrapper around a more generic
> thing, but this:
>
> > - send a REST request to https://foo.kernel.org/lmk:
> >
> > {
> > "tree": "mainline",
> > "commit": "123abc...abc555",
> > "notify": "(output of $(git config user.email))"
> > }
>
> doesn't really sound all that nice for the "I sent a git pull request,
> and want to be notified".
>
> It would be much nicer if the "notification" really did the right
> thing, and created an actual email follow-up, with the correct To/Cc
> and subject lines, but also the proper "References" line so that it
> actually gets threaded properly too.
>
> That implies that it really should be integrated into the mailing list itself.
>
> But I don't know how flexible the whole lkml archive bot is for things
> like this. But I assume you have _some_ hook into new messages coming
> in for lore.kernel.org?
>
> > Would that be a useful alternative? If yes, what would be your preferred
> > workflow for such tool instead of "git lmk [commit] [tree-moniker]"?
>
> I really do suspect that "I sent out a pull request, I'd like to be
> automatically notified when it gets upstream" would be the primary
> thing.
>
> And by "upstreamed" it isn't necessarily just my tree, of course.
>
> Are there other situations where you might want to track something
> _outside_ of a pull request? Maybe. I can't really think of a lot of
> them, though. Patches etc don't have commit ID's to track, but it
> *might* be interesting to see similar automation just based on the git
> patch-ID. But that sounds more like a patchwork issue than something
> like "track pull requests".
I would very much like to see something that works for patches too.
There's a lot of tribal knowledge submitters need to pick up about
each maintainer's process. Reducing that would be beneficial IMO, and
more tractable than the discussions around non-email-based
submissions. For example, with Greg and Mark B you can expect
automated replies; Mark's replies get threaded with the original, but
Greg's do not. For networking, you may or may not get a manual reply,
but patchwork always has the status if you know to go check it. In
reviewing patches I want to know the status too, but that's somewhat
my unique position of reviewing bindings which mostly other
maintainers apply. I've somewhat solved it for myself by automating
checking linux-next, but maybe automated email replies when patches
show up in linux-next would be nice. While that's not immediate, it
should be quick enough. And I'd like to have automated replies sent on
patches I apply, but I'm lazy and haven't managed to set that up yet.
BTW, patchwork tracks pull requests too, so maybe there's a common solution.
Rob
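The patch-ID tracking Rob mentions can be sketched around `git patch-id --stable`, which hashes the diff content and ignores the commit metadata, so a cherry-picked or rebased copy of a patch keeps the same ID (the -next ref name and the rev-list window here are assumptions):

```python
import subprocess

def stable_patch_id(repo, commit):
    # Pipe `git show` into `git patch-id --stable`; the output line is
    # "<patch-id> <commit-id>", and the patch-id survives rebasing.
    show = subprocess.run(
        ["git", "-C", repo, "show", commit],
        capture_output=True, text=True, check=True,
    ).stdout
    out = subprocess.run(
        ["git", "-C", repo, "patch-id", "--stable"],
        input=show, capture_output=True, text=True, check=True,
    ).stdout
    return out.split()[0] if out else None

def landed_in_next(repo, commit, next_ref="linux-next/master", window=5000):
    # Naive scan: does any recent non-merge commit on -next carry the
    # same patch-id? Slow, but it illustrates the idea.
    target = stable_patch_id(repo, commit)
    revs = subprocess.run(
        ["git", "-C", repo, "rev-list", "--no-merges",
         "-n", str(window), next_ref],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    return any(stable_patch_id(repo, r) == target for r in revs)
```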
On Fri, Oct 26, 2018 at 12:36:14PM -0500, Rob Herring wrote:
> On Thu, Oct 25, 2018 at 9:14 AM Linus Torvalds
> <[email protected]> wrote:
> > Are there other situations where you might want to track something
> > _outside_ of a pull request? Maybe. I can't really think of a lot of
> > them, though. Patches etc don't have commit ID's to track, but it
patchwork gives them IDs and lets you do lookups using them; that's
what I'm doing. You can get the ID from a git commit by piping the
output of git show into parser.py from the patchwork source. It works
a lot of the time, but things like editing the commit message will
break it (this is a theme with my scripting around the mail stuff...).
> submissions. For example, with Greg and Mark B you can expect an
> automated replies. Mark's reply gets threaded with the original, but
> Greg's do not. For networking, you may or may not get a manual reply,
Mine *mostly* gets threaded; at the minute it relies on being able to
talk to patchwork to figure out the message ID, so if the patchwork
lookup fails for whatever reason it'll just use what's in the commit
for the CC list and not thread. That isn't ideal, especially when I'm
travelling and my network connection isn't the best; I keep meaning to
try to figure out a better way which would probably be based on git
notes as discussed earlier.
> maintainers apply. I've somewhat solved it for myself by automating
> checking linux-next, but maybe automated email replies to patches
> being in linux-next would be nice. While that's not immediate, it
Yeah, I do that as well (I have an outbound patch queue I rebase
against -next; this also tells me if stuff starts conflicting with
other work). I can see the automated e-mails being useful, but it
might be tricky for a bot that's only looking at git to figure out
which people and lists to CC to ensure visibility unless we do the
annotation thing; it's not just the patch submitters that want
visibility - I did the patchwork stuff due to user demand for that,
with some help from Brian Norris.
> BTW, patchwork tracks pull requests too, so maybe there's a common solution.
Ooh, interesting...
On Thu, Oct 25, 2018 at 07:13:59AM -0700, Linus Torvalds wrote:
> It would be much nicer if the "notification" really did the right
> thing, and created an actual email follow-up, with the correct To/Cc
> and subject lines, but also the proper "References" line so that it
> actually gets threaded properly too.
>
> That implies that it really should be integrated into the mailing list itself.
>
> But I don't know how flexible the whole lkml archive bot is for things
> like this. But I assume you have _some_ hook into new messages coming
> in for lore.kernel.org?
>
> > Would that be a useful alternative? If yes, what would be your preferred
> > workflow for such tool instead of "git lmk [commit] [tree-moniker]"?
>
> I really do suspect that "I sent out a pull request, I'd like to be
> automatically notified when it gets upstream" would be the primary
> thing.
>
> And by "upstreamed" it isn't necessarily just my tree, of course.
I should have something working soon, hopefully -- which should be
sufficiently generic to be adapted for other devs.
Regarding your case specifically, what's a good cutoff period for
treating a pull request as effectively ignored/abandoned (i.e. no
matching commit-id ever found in the repo)? I'm guessing about a month,
or do you want to go longer, in case something shifts to the following
merge window?
Regards,
-K
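The landed-check itself is a git one-liner, and the cutoff just bounds how long a watch stays live; a sketch (the ref name and the 30-day default are placeholders):

```python
import subprocess
import time

def commit_in_tree(repo, commit, ref="origin/master"):
    # Exit status 0 from `git merge-base --is-ancestor` means `commit`
    # is reachable from `ref`, i.e. the pull has landed.
    rc = subprocess.run(
        ["git", "-C", repo, "merge-base", "--is-ancestor", commit, ref],
        stderr=subprocess.DEVNULL,
    ).returncode
    return rc == 0

def watch_expired(requested_at, cutoff_days=30):
    # Treat a pull request as abandoned once the cutoff has passed
    # without the commit ever appearing in the watched tree.
    return time.time() - requested_at > cutoff_days * 86400
```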
On Wed, Oct 31, 2018 at 7:28 AM Konstantin Ryabitsev
<[email protected]> wrote:
>
> Regarding your case specifically, what's a good cutoff period for
> treating a pull request as effectively ignored/abandoned (i.e. no
> matching commit-id ever found in the repo)? I'm guessing about a month,
> or do you want to go longer, in case something shifts to the following
> merge window?
Oh, I'd definitely not go longer - if anything, I think it could be shorter.
My *normal* reaction time is on the order of days. But yes, every
merge window there are a couple of pulls that I end up delaying to the
end of the merge window when I'm supposed to have more time to really
review them. Right now I have three such pull requests pending, for
example (and had planned to look at them today, but then new "normal"
pull requests happened, so I still haven't gotten around to them).
But even when those things get put in my queue, the queue shouldn't be
longer than the 2-week merge window, and if it is, I end up responding
separately (ie writing people "ok, I'm still mulling this over, but
it's not making rc1").
So I think a one-month queue is more than sufficient, and if there are
reasons to time things out earlier, a two-week one would be perfectly
fine too.
Linus
Mark Brown <[email protected]> writes:
> On Fri, Oct 26, 2018 at 12:36:14PM -0500, Rob Herring wrote:
>> On Thu, Oct 25, 2018 at 9:14 AM Linus Torvalds
>> <[email protected]> wrote:
>
>> > Are there other situations where you might want to track something
>> > _outside_ of a pull request? Maybe. I can't really think of a lot of
>> > them, though. Patches etc don't have commit ID's to track, but it
>
> patchwork gives them IDs and lets you do lookups using them, that's what
> I'm doing. You can get the ID from a git commit by piping the output of
> git show into parser.py from the patchwork source, it works a lot of the
> time but things like editing the commit message will break it (this is a
> theme with my scripting around the mail stuff...).
>
>> submissions. For example, with Greg and Mark B you can expect an
>> automated replies. Mark's reply gets threaded with the original, but
>> Greg's do not. For networking, you may or may not get a manual reply,
>
> Mine *mostly* gets threaded, it's relying on being able to talk to
> patchwork to figure out the message ID at the minute so if the patchwork
> lookup fails for whatever reason it'll just use on what's in the commit
> for the CC list and not thread. That isn't ideal, especially when I'm
> travelling and my network connection isn't the best, I keep meaning to
> try to figure out a better way which would probably be based on git
> notes as discussed earlier.
Yeah I use git notes for this.
When I apply a patch I record the patchwork id in a git note; I have a
custom hacked pwclient that does it automatically. I also download the
full mbox from patchwork and stash it in .git/patchwork/<patch id>.
Then I have everything I need to generate a properly threaded reply to
the original mail.
The git notes work well, if you add the following to your .git/config:
[notes]
rewriteRef = refs/notes/*
displayRef = refs/notes/*
Then all notes are copied when you rewrite a commit (rebase), and also
displayed by e.g. git show.
Every now and then if you do extensive rebasing/splitting you get
commits with the wrong or no patchwork ids. But that's pretty rare and
not that hard to fix up when it happens.
There's a slightly sanitised version of some of my scripts here:
https://github.com/mpe/patchwork-scripts
cheers
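The note-recording step Michael describes can be sketched in a few lines (the "patchwork-id:" note format follows his description; the helper names are invented):

```python
import subprocess

def record_patchwork_id(repo, commit, pw_id):
    # Attach the patchwork id to the commit as a git note; with the
    # rewriteRef config shown above, the note follows the commit
    # across rebases.
    subprocess.run(
        ["git", "-C", repo, "notes", "add", "-f",
         "-m", f"patchwork-id: {pw_id}", commit],
        check=True,
    )

def read_patchwork_id(repo, commit):
    # `git notes show` prints the note text, or fails if there is none.
    out = subprocess.run(
        ["git", "-C", repo, "notes", "show", commit],
        capture_output=True, text=True,
    ).stdout
    for line in out.splitlines():
        if line.startswith("patchwork-id:"):
            return line.split(":", 1)[1].strip()
    return None
```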
Hello Michael,
On Thu, 01 Nov 2018 21:18:28 +1100
Michael Ellerman <[email protected]> wrote:
> Mark Brown <[email protected]> writes:
>
> > On Fri, Oct 26, 2018 at 12:36:14PM -0500, Rob Herring wrote:
> >> On Thu, Oct 25, 2018 at 9:14 AM Linus Torvalds
> >> <[email protected]> wrote:
> >
> >> > Are there other situations where you might want to track something
> >> > _outside_ of a pull request? Maybe. I can't really think of a lot of
> >> > them, though. Patches etc don't have commit ID's to track, but it
> >
> > patchwork gives them IDs and lets you do lookups using them, that's what
> > I'm doing. You can get the ID from a git commit by piping the output of
> > git show into parser.py from the patchwork source, it works a lot of the
> > time but things like editing the commit message will break it (this is a
> > theme with my scripting around the mail stuff...).
> >
> >> submissions. For example, with Greg and Mark B you can expect an
> >> automated replies. Mark's reply gets threaded with the original, but
> >> Greg's do not. For networking, you may or may not get a manual reply,
> >
> > Mine *mostly* gets threaded, it's relying on being able to talk to
> > patchwork to figure out the message ID at the minute so if the patchwork
> > lookup fails for whatever reason it'll just use on what's in the commit
> > for the CC list and not thread. That isn't ideal, especially when I'm
> > travelling and my network connection isn't the best, I keep meaning to
> > try to figure out a better way which would probably be based on git
> > notes as discussed earlier.
>
> Yeah I use git notes for this.
>
> When I apply a patch I record the patchwork id in a git note, I have a
> custom hacked pwclient that does it automatically. I also download the
> full mbox from patchwork and stash it in .git/patchwork/<patch id>.
>
> Then I have everything I need to generate a properly threaded reply to
> the original mail.
>
> The git notes work well, if you add the following to your .git/config:
>
> [notes]
> rewriteRef = refs/notes/*
> displayRef = refs/notes/*
>
> Then all notes are copied when you rewrite a commit (rebase), and also
> displayed by eg. git show.
>
> Every now and then if you do extensive rebasing/splitting you get
> commits with the wrong or no patchwork ids. But that's pretty rare and
> not that hard to fixup when it happens.
>
> There's a slightly sanitised version of some of my scripts here:
> https://github.com/mpe/patchwork-scripts
I had pretty much the same workflow to automatically update the patch
status in patchwork when I push things to the MTD tree, but I was
lacking the part that sends notifications (this was done manually).
With your scripts this is now addressed, thanks a lot for sharing
them!
Boris
Boris Brezillon <[email protected]> writes:
> Hello Michael,
>
> On Thu, 01 Nov 2018 21:18:28 +1100
> Michael Ellerman <[email protected]> wrote:
>
>> Mark Brown <[email protected]> writes:
>>
>> > On Fri, Oct 26, 2018 at 12:36:14PM -0500, Rob Herring wrote:
>> >> On Thu, Oct 25, 2018 at 9:14 AM Linus Torvalds
>> >> <[email protected]> wrote:
>> >
>> >> > Are there other situations where you might want to track something
>> >> > _outside_ of a pull request? Maybe. I can't really think of a lot of
>> >> > them, though. Patches etc don't have commit ID's to track, but it
>> >
>> > patchwork gives them IDs and lets you do lookups using them, that's what
>> > I'm doing. You can get the ID from a git commit by piping the output of
>> > git show into parser.py from the patchwork source, it works a lot of the
>> > time but things like editing the commit message will break it (this is a
>> > theme with my scripting around the mail stuff...).
>> >
>> >> submissions. For example, with Greg and Mark B you can expect an
>> >> automated replies. Mark's reply gets threaded with the original, but
>> >> Greg's do not. For networking, you may or may not get a manual reply,
>> >
>> > Mine *mostly* gets threaded, it's relying on being able to talk to
>> > patchwork to figure out the message ID at the minute so if the patchwork
>> > lookup fails for whatever reason it'll just use on what's in the commit
>> > for the CC list and not thread. That isn't ideal, especially when I'm
>> > travelling and my network connection isn't the best, I keep meaning to
>> > try to figure out a better way which would probably be based on git
>> > notes as discussed earlier.
>>
>> Yeah I use git notes for this.
>>
>> When I apply a patch I record the patchwork id in a git note, I have a
>> custom hacked pwclient that does it automatically. I also download the
>> full mbox from patchwork and stash it in .git/patchwork/<patch id>.
>>
>> Then I have everything I need to generate a properly threaded reply to
>> the original mail.
>>
>> The git notes work well, if you add the following to your .git/config:
>>
>> [notes]
>> rewriteRef = refs/notes/*
>> displayRef = refs/notes/*
>>
>> Then all notes are copied when you rewrite a commit (rebase), and also
>> displayed by eg. git show.
>>
>> Every now and then if you do extensive rebasing/splitting you get
>> commits with the wrong or no patchwork ids. But that's pretty rare and
>> not that hard to fixup when it happens.
>>
>> There's a slightly sanitised version of some of my scripts here:
>> https://github.com/mpe/patchwork-scripts
>
> I had pretty much the same workflow to automatically update the patch
> status in patchwork when I push things to the MTD tree, but I was
> lacking the part sending notifications (this was done manually).
>
> With your scripts this is now addressed, thanks a lot for sharing
> them!
Awesome, glad they helped!
I have some modifications locally to detect when I've merged an entire
series and only reply to the first patch. At the moment that's all a bit
too hacky for public viewing, but I'll try and clean it up at some point
and push it out :)
cheers