Time |
Nick |
Message |
00:00 |
|
csharp_ joined #evergreen |
00:01 |
|
bshum joined #evergreen |
00:09 |
|
serflog joined #evergreen |
00:34 |
|
serflog joined #evergreen |
01:11 |
|
serflog joined #evergreen |
01:11 |
|
Topic for #evergreen is now Welcome to the #evergreen library system channel! | We are publicly logged. | Large pastes at http://paste.evergreen-ils.org |
01:17 |
|
fparks joined #evergreen |
01:19 |
|
berickm joined #evergreen |
01:19 |
|
bshum joined #evergreen |
01:26 |
|
fparks joined #evergreen |
01:29 |
|
bradl joined #evergreen |
01:31 |
|
dconnor joined #evergreen |
01:34 |
|
fparks joined #evergreen |
01:35 |
|
dconnor__ joined #evergreen |
01:38 |
|
gdunbar joined #evergreen |
01:42 |
|
fparks joined #evergreen |
01:50 |
|
jcamins_ joined #evergreen |
01:50 |
|
bradl joined #evergreen |
01:50 |
|
gmcharlt joined #evergreen |
01:51 |
|
berickm joined #evergreen |
02:04 |
|
gmcharlt joined #evergreen |
02:04 |
|
36DACC20Q joined #evergreen |
02:18 |
|
gmcharlt joined #evergreen |
02:22 |
|
berick joined #evergreen |
02:22 |
|
bradl joined #evergreen |
02:41 |
|
_bott_ joined #evergreen |
02:41 |
|
jeff joined #evergreen |
03:04 |
|
jeffdavis joined #evergreen |
03:13 |
|
gdunbar joined #evergreen |
03:16 |
|
kmlussier joined #evergreen |
03:17 |
|
bradl joined #evergreen |
03:30 |
|
bradl joined #evergreen |
03:30 |
|
berick joined #evergreen |
03:30 |
|
gmcharlt joined #evergreen |
03:30 |
|
fparks joined #evergreen |
03:30 |
|
bshum joined #evergreen |
03:30 |
|
tsbere__ joined #evergreen |
03:30 |
|
17SAAFEOK joined #evergreen |
03:30 |
|
mceraso joined #evergreen |
03:30 |
|
shadowspar joined #evergreen |
03:30 |
|
eeevil joined #evergreen |
03:30 |
|
phasefx2 joined #evergreen |
03:30 |
|
eby joined #evergreen |
03:30 |
|
ningalls joined #evergreen |
03:30 |
|
linuxhiker_away joined #evergreen |
03:30 |
|
wjr joined #evergreen |
03:30 |
|
pastebot joined #evergreen |
03:30 |
|
ktomita joined #evergreen |
03:30 |
|
paxed joined #evergreen |
03:30 |
|
rangi joined #evergreen |
03:30 |
|
berick__ joined #evergreen |
03:30 |
|
b_bonner joined #evergreen |
03:30 |
|
rri_ joined #evergreen |
03:30 |
|
zxiiro_ joined #evergreen |
03:30 |
|
dbwells_ joined #evergreen |
03:30 |
|
mtcarlson_away joined #evergreen |
03:30 |
|
ldwhalen joined #evergreen |
03:30 |
|
dkyle joined #evergreen |
03:30 |
|
pmurray joined #evergreen |
03:30 |
|
jeff_ joined #evergreen |
03:30 |
|
dbs joined #evergreen |
03:37 |
|
fparks_ joined #evergreen |
03:37 |
|
jeffdavis joined #evergreen |
03:37 |
|
gdunbar joined #evergreen |
03:37 |
|
jcamins joined #evergreen |
03:37 |
|
berick joined #evergreen |
03:37 |
|
gmcharlt joined #evergreen |
03:37 |
|
fparks joined #evergreen |
03:37 |
|
bshum joined #evergreen |
03:37 |
|
tsbere__ joined #evergreen |
03:37 |
|
17SAAFEOK joined #evergreen |
03:37 |
|
mceraso joined #evergreen |
03:37 |
|
shadowspar joined #evergreen |
03:37 |
|
eeevil joined #evergreen |
03:37 |
|
phasefx2 joined #evergreen |
03:37 |
|
eby joined #evergreen |
03:37 |
|
ningalls joined #evergreen |
03:37 |
|
linuxhiker_away joined #evergreen |
03:37 |
|
wjr joined #evergreen |
03:37 |
|
pastebot joined #evergreen |
03:37 |
|
ktomita joined #evergreen |
03:37 |
|
paxed joined #evergreen |
03:37 |
|
rangi joined #evergreen |
03:37 |
|
berick__ joined #evergreen |
03:37 |
|
b_bonner joined #evergreen |
03:37 |
|
rri_ joined #evergreen |
03:37 |
|
zxiiro_ joined #evergreen |
03:37 |
|
dbwells_ joined #evergreen |
03:37 |
|
mtcarlson_away joined #evergreen |
03:37 |
|
ldwhalen joined #evergreen |
03:37 |
|
dkyle joined #evergreen |
03:37 |
|
pmurray joined #evergreen |
03:37 |
|
jeff_ joined #evergreen |
03:37 |
|
dbs joined #evergreen |
03:37 |
|
bradl joined #evergreen |
03:42 |
|
_bott_ joined #evergreen |
03:42 |
|
jeff joined #evergreen |
03:45 |
|
jeffdavis joined #evergreen |
03:45 |
|
berickm joined #evergreen |
03:49 |
|
kmlussier joined #evergreen |
03:56 |
|
kmlussier joined #evergreen |
03:56 |
|
berickm joined #evergreen |
03:56 |
|
bradl joined #evergreen |
03:56 |
|
fparks_ joined #evergreen |
03:56 |
|
gdunbar joined #evergreen |
03:56 |
|
jcamins joined #evergreen |
03:56 |
|
berick joined #evergreen |
03:56 |
|
gmcharlt joined #evergreen |
03:56 |
|
fparks joined #evergreen |
03:56 |
|
bshum joined #evergreen |
03:56 |
|
tsbere__ joined #evergreen |
03:56 |
|
17SAAFEOK joined #evergreen |
03:56 |
|
mceraso joined #evergreen |
03:56 |
|
shadowspar joined #evergreen |
03:56 |
|
eeevil joined #evergreen |
03:56 |
|
phasefx2 joined #evergreen |
03:56 |
|
eby joined #evergreen |
03:56 |
|
ningalls joined #evergreen |
03:56 |
|
linuxhiker_away joined #evergreen |
03:56 |
|
wjr joined #evergreen |
03:56 |
|
pastebot joined #evergreen |
03:56 |
|
ktomita joined #evergreen |
03:56 |
|
paxed joined #evergreen |
03:56 |
|
rangi joined #evergreen |
03:56 |
|
berick__ joined #evergreen |
03:56 |
|
b_bonner joined #evergreen |
03:56 |
|
rri_ joined #evergreen |
03:56 |
|
zxiiro_ joined #evergreen |
03:56 |
|
dbwells_ joined #evergreen |
03:56 |
|
mtcarlson_away joined #evergreen |
03:56 |
|
ldwhalen joined #evergreen |
03:56 |
|
dkyle joined #evergreen |
03:56 |
|
pmurray joined #evergreen |
03:56 |
|
jeff_ joined #evergreen |
03:56 |
|
dbs joined #evergreen |
04:01 |
|
jeffdavis joined #evergreen |
04:16 |
|
gdunbar joined #evergreen |
04:18 |
|
gdunbar joined #evergreen |
04:18 |
|
jcamins joined #evergreen |
04:26 |
|
_bott_ joined #evergreen |
04:26 |
|
jeff joined #evergreen |
04:37 |
|
jeffdavis joined #evergreen |
04:43 |
|
jcamins joined #evergreen |
04:57 |
|
ktomita_ joined #evergreen |
04:57 |
|
paxed joined #evergreen |
04:57 |
|
dbs joined #evergreen |
05:00 |
|
paxed joined #evergreen |
05:00 |
|
paxed joined #evergreen |
05:02 |
|
linuxhikz joined #evergreen |
05:02 |
|
dbs joined #evergreen |
05:03 |
|
ktomita__ joined #evergreen |
05:05 |
|
ningalls joined #evergreen |
05:10 |
|
dbs_ joined #evergreen |
05:14 |
|
linuxhiker_away joined #evergreen |
05:17 |
|
shadowsp1r joined #evergreen |
05:18 |
|
goooood joined #evergreen |
05:18 |
|
phasefx2_ joined #evergreen |
05:19 |
|
wjr_ joined #evergreen |
05:32 |
|
eeevil joined #evergreen |
05:32 |
|
ningalls joined #evergreen |
05:39 |
|
pastebot0 joined #evergreen |
05:40 |
|
goooood joined #evergreen |
05:44 |
|
pastebot joined #evergreen |
05:57 |
|
edoceo joined #evergreen |
06:01 |
|
paxed joined #evergreen |
06:01 |
|
paxed joined #evergreen |
06:14 |
|
egbuilder joined #evergreen |
06:21 |
|
pinesol_green joined #evergreen |
06:22 |
|
eeevil joined #evergreen |
06:38 |
|
paxed joined #evergreen |
06:38 |
|
eeevil joined #evergreen |
06:39 |
|
paxed joined #evergreen |
06:40 |
|
tfaile joined #evergreen |
06:45 |
|
pinesol_green joined #evergreen |
06:45 |
|
phasefx2 joined #evergreen |
06:46 |
|
pinesol_green joined #evergreen |
07:05 |
|
phasefx joined #evergreen |
07:21 |
|
eeevil joined #evergreen |
07:21 |
|
mrpeters joined #evergreen |
07:24 |
|
tater joined #evergreen |
07:26 |
|
eeevil joined #evergreen |
07:26 |
|
ningalls joined #evergreen |
07:26 |
|
egbuilder joined #evergreen |
07:27 |
|
paxed joined #evergreen |
08:05 |
csharp |
ugh DDoS |
08:07 |
|
csharp joined #evergreen |
08:14 |
|
kmlussier joined #evergreen |
08:14 |
|
eby joined #evergreen |
08:18 |
|
fparks joined #evergreen |
08:26 |
|
rjackson-isl joined #evergreen |
08:33 |
|
fparks joined #evergreen |
08:34 |
|
jboyer-isl joined #evergreen |
08:37 |
|
jboyer-isl left #evergreen |
08:39 |
|
jboyer-isl joined #evergreen |
08:49 |
|
mmorgan1 joined #evergreen |
08:49 |
|
mmorgan1 left #evergreen |
08:52 |
|
collum joined #evergreen |
08:52 |
|
gmcharlt joined #evergreen |
08:52 |
|
phasefx2 joined #evergreen |
08:57 |
|
Dyrcona joined #evergreen |
09:03 |
|
ericar joined #evergreen |
09:06 |
|
RoganH joined #evergreen |
09:08 |
|
wjr joined #evergreen |
09:12 |
|
ningalls joined #evergreen |
09:13 |
|
Callender joined #evergreen |
09:13 |
|
timlaptop joined #evergreen |
09:14 |
jeff |
I find myself wanting to stage user settings and stat cats. |
09:14 |
jeff |
Has anyone else had a similar hankering? |
09:16 |
dbs_ |
@later tell hbrennan hah, that's what I get for going to a conference :) |
09:16 |
jboyer-isl |
No, but I'm curious about the workflow that led you to this urge. |
09:16 |
|
dluch joined #evergreen |
09:17 |
|
dexap joined #evergreen |
09:18 |
jeff |
jboyer-isl: wanting patrons to provide township ("Home Location" patron stat cat) and opt-in to e-mail newsletter (not storing this anywhere currently, but a user setting seems most logical) |
09:18 |
|
13WABR6ZD joined #evergreen |
09:19 |
|
timhome joined #evergreen |
09:19 |
|
egbuilder_ joined #evergreen |
09:20 |
|
goooood joined #evergreen |
09:21 |
jeff |
jboyer-isl: does that satisfy your curiosity? |
09:22 |
|
ningalls joined #evergreen |
09:22 |
jboyer-isl |
Kind of. I can see the benefit in allowing them to specify it, but I guess the staging is lost on me. You just want to keep the settings from being "official" until staff have reviewed them? |
09:23 |
|
RoganH_ joined #evergreen |
09:23 |
|
Callender joined #evergreen |
09:24 |
jeff |
no, i want users who don't exist yet (because they are staged users created with open-ils.actor.user.stage.create) to be able to have staged stat cats and staged user settings to go with their staged addresses, etc. |
09:24 |
jeff |
different from "pending" addresses on a real patron. |
09:24 |
|
gmcharlt_ joined #evergreen |
09:25 |
|
eeevil joined #evergreen |
09:25 |
jboyer-isl |
Hey, now it all makes sense. I haven't looked enough at how staged users work. That does sound like a good idea when looking at all of the pieces. :) |
09:25 |
|
phasefx2_ joined #evergreen |
09:25 |
jeff |
much more of this, and we'll need to invest in a redundant array of irc logging bots. either that, or distributed logging. :-) |
09:25 |
|
dluch2 joined #evergreen |
09:25 |
dbs_ |
distributed logging... I like that idea |
09:25 |
jeff |
jboyer-isl: the idea being that patrons when self-registering could specify certain stat cats or user settings. |
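To make jeff's proposal concrete, here is a hypothetical sketch of a staging call carrying the extra payloads. The method name comes from the discussion above, but the argument order and shapes are assumptions, and the stat cat and user setting arguments are the proposed additions, not an existing API.

    # Hypothetical sketch only: argument shapes are assumed, and the last
    # two arguments are the *proposed* additions, not the current API.
    use OpenSRF::AppSession;

    my $ses = OpenSRF::AppSession->create('open-ils.actor');
    my $req = $ses->request(
        'open-ils.actor.user.stage.create',
        $staged_user,       # stgu object built from the self-registration form
        $mailing_address,   # staged mailing address
        $billing_address,   # staged billing address (ideally deduped when identical)
        $stat_cat_entries,  # PROPOSED: staged stat cats, e.g. township
        $user_settings,     # PROPOSED: staged user settings, e.g. newsletter opt-in
    );
    my $result = $req->gather(1);  # staging row identifier on success
    $ses->disconnect;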
09:25 |
jboyer-isl |
One bot logs the join/parts, the other logs the good parts. :D |
09:26 |
jboyer-isl |
jeff: that's the part I wasn't following at first; it does sound like a nice enhancement. |
09:27 |
Dyrcona |
pinesol_green does not seem to be in channel this morning. |
09:27 |
jeff |
And now that I'm thinking about it, I'd also like to teach that API call to not create two addresses if they're identical. |
09:30 |
dbs_ |
jeff: if account #2 changes the address they share with account #1, does it ask the staff if account #1's address should also be changed? |
09:30 |
|
dexap joined #evergreen |
09:31 |
jeff |
dbs_: currently there is no obvious linkage from that direction, last i checked -- but i'm not talking about multiple users here. |
09:32 |
jeff |
dbs_: currently, when you stage a user, you supply a new address for billing and for mailing -- even if they're identical, the method creates a pair of addresses, and when creating the staged user the staff client uses both, so the account display shows one address as mailing and one as billing, both identical and both distinct. |
09:33 |
jeff |
(as opposed to having one address with the billing and mailing radio selectors selected) |
09:33 |
|
paxed joined #evergreen |
09:34 |
csharp |
someone is DDoS-ing freenode so pinesol_green's connection is timing out |
09:34 |
|
pinesol_green joined #evergreen |
09:34 |
csharp |
ah - there it is |
09:35 |
csharp |
@eightball is pinesol_green okay? |
09:36 |
pinesol_green |
csharp: About as likely as pigs flying. |
09:36 |
dbs_ |
pinesol_green++ |
09:36 |
dbs_ |
TRUTH |
09:38 |
Dyrcona |
csharp: I guess some people have nothing better to do. |
09:39 |
csharp |
yeah |
09:40 |
Dyrcona |
It's not like you could really extort freenode. |
09:42 |
jboyer-isl |
People need a motive to mess things up on the internet these days? |
09:42 |
Dyrcona |
People have never needed a motive to mess things up, ever. |
09:42 |
Dyrcona |
I call it "bored and stupid." |
09:43 |
jboyer-isl |
Yes, indeed. |
09:43 |
csharp |
our staff is having a discussion about closed dates and fine generation... from my reading of the code (open-ils.storage.action.circulation.overdue.generate_fines), closed dates are consulted at the time that code is run ("Is today a closed date for the circ lib?"), but not retroactively |
09:43 |
tbsere |
"Someone insulted me in a channel on that network! So now I am going to try and knock the network offline!" seems to be enough for a lot of people |
09:44 |
csharp |
as in "are any of the days this item has been overdue a closed date for the circ lib?" |
09:44 |
Dyrcona |
csharp: The fine generator expects to run every day, I think. |
09:44 |
tbsere |
csharp: Fine generation already skips ahead to "just after the last time a fine was entered", I believe |
09:45 |
csharp |
Dyrcona: yeah ours runs at 03:00 daily |
09:45 |
tbsere |
So in theory those dates were dealt with on previous runs |
09:45 |
Dyrcona |
tsbere: Yes, it does. I had some fun with that code in one of my billing branches. :) |
09:45 |
tbsere |
Closed dates being added *after* fine generation is a different story entirely.... |
09:46 |
jeff |
csharp: are you considering snow days? |
09:46 |
jeff |
csharp: or some other use case? |
09:46 |
csharp |
tbsere: (attempt at tab autocompletion reveals your changed nick ;-)) what appears to happen is that it adds the fines for the closed dates *after* the fact |
09:46 |
Dyrcona |
Well, that's like changing the circ rules after the fact. |
09:46 |
csharp |
like it fills in the missing days |
09:46 |
csharp |
jeff: yes - snow days are what touched this off |
09:47 |
|
paxed joined #evergreen |
09:47 |
csharp |
argh |
09:49 |
jeff |
csharp: assuming you can't get the closed date in before the fines in question are generated, i think the next best thing is going to be a new feature for forgiving (or voiding, using the soon-to-be style of voiding) the fines for the snow day. i don't even want to think about things like grace periods, though. |
09:49 |
jeff |
csharp: what you described sounds like a bug, possibly a result of a violation of the "nothing will be DUE on a closed date" norm. |
09:50 |
csharp |
jeff: yeah - it's funny that it took this look to uncover it |
09:51 |
csharp |
we had a request for a report for "fines accrued between date X and date Y" so a library could forgive fines that accrued over our MLK upgrade, and that is what revealed this |
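For reference, a rough sketch of the kind of query behind such a report. This is illustrative only, not csharp's actual report, and it assumes the stock seed data where billing type 1 is "Overdue materials"; the dates are placeholders.

    -- Illustrative sketch: overdue fines billed during a known closure
    -- window, as candidates for forgiveness.
    -- Assumes stock seed data (config.billing_type id 1 = Overdue materials).
    SELECT mb.id, mb.xact, mb.amount, mb.billing_ts
      FROM money.billing mb
     WHERE mb.btype = 1
       AND NOT mb.voided
       AND mb.billing_ts BETWEEN '2014-01-18' AND '2014-01-22';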
09:51 |
jeff |
it's possible that a recent(-ish) optimization introduced it. it might not have been lurking long. :-) |
09:53 |
csharp |
hmm - may be f040814c |
09:54 |
csharp |
commit snarfing may be down because of the ddos |
09:54 |
jeff |
hrm. my "tangents" board in trello is quickly adopting a FINO queueing strategy. |
09:54 |
jeff |
First In, Never Out |
09:54 |
Dyrcona |
fun times on freenode |
09:55 |
jeff |
well, you need to qualify the hash with "commit" to get pinesol_green to even notice: commit f040814c |
09:55 |
csharp |
jeff: ah |
09:55 |
jeff |
AND the bot needs to be present. oof. :P |
09:55 |
csharp |
in any case: http://git.evergreen-ils.org/?p=Evergreen.git;a=commit;h=f040814c7507291c388a35a23c8878293a2524e4 |
09:55 |
jeff |
apologies to those also in #code4lib: |
09:55 |
jeff |
> Reflexively, I tried to pull up a HUD with my files on the Mansion fans we hoped to recruit. Of course, nothing happened. I'd done that a dozen times that morning, and there was no end in sight. |
09:58 |
csharp |
pinesol_green still trying to connect |
09:59 |
csharp |
it is seeing our chat somehow though |
10:00 |
|
jboyer-isl joined #evergreen |
10:01 |
|
Callender joined #evergreen |
10:01 |
bshum |
Technically we do have two loggings. serflog is the ilbot for new logs. |
10:02 |
|
artunit joined #evergreen |
10:02 |
csharp |
bshum: I'm tailing the nohup.out log and I see our messages here alongside the ones that say "connection to freenode timed out" |
10:02 |
dbs_ |
by "distributed logging" I imagine multiple bots coordinating their respective versions of "the truth" |
10:02 |
|
gmcharlt joined #evergreen |
10:03 |
bshum |
csharp: Might be pastebot sharing the nohup file. |
10:03 |
|
BigRig joined #evergreen |
10:03 |
csharp |
dbs: kind of like media outlets: "we log - you decide!" |
10:06 |
|
eeevil joined #evergreen |
10:06 |
|
phasefx2 joined #evergreen |
10:06 |
|
dluch joined #evergreen |
10:06 |
|
tsbere joined #evergreen |
10:06 |
|
ningalls joined #evergreen |
10:06 |
Dyrcona |
Welcome back! |
10:08 |
* tsbere |
notes that even when he appears to be connected lag is apparently an issue server to server right now on Freenode |
10:09 |
phasefx |
csharp: did you see the email to open-ils-general-owner? want me to handle it? |
10:11 |
|
collum joined #evergreen |
10:11 |
csharp |
phasefx: no, I hadn't seen it - please do - thanks! |
10:12 |
phasefx |
roger doger |
10:12 |
* csharp |
filters all of his mailman admin mail for all lists (EG + PINES) into a much neglected single folder ;-) |
10:17 |
|
artunit_ joined #evergreen |
10:19 |
|
pinesol_green joined #evergreen |
10:20 |
csharp |
bot musical chairs |
10:20 |
|
ericar_ joined #evergreen |
10:20 |
csharp |
bshum: oh - you're right about them sharing the file - I wasn't understanding what you meant before |
10:20 |
|
BigRig joined #evergreen |
10:20 |
csharp |
(nohup.out) |
10:20 |
|
pastebot0 joined #evergreen |
10:24 |
bshum |
csharp: Yeah it's one of the many little things I want to deal with someday. |
10:28 |
|
rfrasur joined #evergreen |
10:31 |
|
BigRig_ joined #evergreen |
10:33 |
|
artunit_ joined #evergreen |
10:40 |
|
goooood joined #evergreen |
10:40 |
|
ericar joined #evergreen |
10:40 |
|
dluch joined #evergreen |
10:40 |
|
mrpeters joined #evergreen |
10:40 |
|
phasefx2 joined #evergreen |
10:41 |
|
pinesol_green joined #evergreen |
10:41 |
|
ningalls joined #evergreen |
10:41 |
|
RoganH joined #evergreen |
10:41 |
|
tsbere joined #evergreen |
10:41 |
|
collum joined #evergreen |
10:41 |
|
yboston joined #evergreen |
10:42 |
|
pastebot joined #evergreen |
10:44 |
bshum |
Maybe I'll go hide out on OFTC today. |
10:45 |
jeff |
bshum: why aren't you there already? :-) |
10:46 |
bshum |
jeff: I just had to set back up the #evergreen channel on the other side ;) |
10:47 |
csharp |
we have an OFTC channel too? |
10:47 |
bshum |
I put one there awhile back when someone random showed up on OFTC to ask questions... I just made the topic a pointer back to here though. |
10:48 |
bshum |
I just leave myself in the channel though, just in case. |
10:48 |
|
rfrasur_ joined #evergreen |
10:48 |
bshum |
I should add the bot to watch it. |
10:48 |
|
Callender_ joined #evergreen |
11:11 |
|
_bott_ joined #evergreen |
11:11 |
|
jeff joined #evergreen |
11:14 |
|
mmorgan joined #evergreen |
11:20 |
|
tsbere_ joined #evergreen |
11:21 |
|
Callender joined #evergreen |
11:26 |
|
RoganH_ joined #evergreen |
11:27 |
|
ericar_ joined #evergreen |
11:27 |
|
rfrasur joined #evergreen |
11:45 |
|
jeff joined #evergreen |
11:45 |
|
jeff joined #evergreen |
11:46 |
jeff |
from Budapest to Bucharest. |
11:47 |
bshum |
Heh |
11:52 |
|
tater-laptop joined #evergreen |
11:57 |
|
ericar joined #evergreen |
11:59 |
|
rfrasur joined #evergreen |
12:00 |
|
eeevil joined #evergreen |
12:00 |
|
artunit joined #evergreen |
12:00 |
|
mtate joined #evergreen |
12:00 |
|
ningalls joined #evergreen |
12:00 |
|
dluch joined #evergreen |
12:00 |
|
fparks joined #evergreen |
12:00 |
|
pastebot joined #evergreen |
12:00 |
|
gmcharlt joined #evergreen |
12:00 |
|
phasefx2 joined #evergreen |
12:00 |
|
Dyrcona joined #evergreen |
12:00 |
|
Callender joined #evergreen |
12:00 |
|
yboston joined #evergreen |
12:01 |
gmcharlt |
that was painful |
12:02 |
|
mrpeters joined #evergreen |
12:02 |
|
pinesol_green joined #evergreen |
12:05 |
|
pinesol_green joined #evergreen |
12:06 |
|
egbuilder joined #evergreen |
12:08 |
bshum |
Poor staff |
12:08 |
bshum |
They've been using the force hold staff interface to make copy holds for patrons. |
12:08 |
bshum |
and wondering why the patrons don't ever get notified about the holds. |
12:08 |
csharp |
yowza |
12:09 |
pastebot |
"Dyrcona" at 64.57.241.14 pasted "Script Causing me problems" (61 lines) at http://paste.evergreen-ils.org/11 |
12:10 |
bshum |
I'm now not entirely sure why we don't include some lookups for notification preferences. But in the meantime, I'm instructing them not to use that interface for placing holds... |
12:12 |
gmcharlt |
Dyrcona: thanks, I'll take a look |
12:14 |
Dyrcona |
gmcharlt: Thanks. I guess all you have to do is get a response that is too large and then try something right after. |
12:15 |
Dyrcona |
Also, git log on OpenSRF master turns up some strange things, like two commits in a row with different hashes but identical commit messages. |
12:15 |
bshum |
'tis the crazy merge? |
12:16 |
|
RoganH joined #evergreen |
12:16 |
|
mmorgan joined #evergreen |
12:16 |
bshum |
I remember seeing some duplicate looking messages, etc. I think those resulted from the merge the other day. |
12:18 |
gmcharlt |
yep, it arose from the merge, but at least means that master and rel_2_3 remain the same until I cut 2.3.0-beta |
12:19 |
Dyrcona |
It seemed odd to me to merge rel_2_3 into master. I'd usually expect everything to go into master first, and then maybe merge the other direction. |
12:20 |
Dyrcona |
I don't thlnk that has anything to do with the problem that I'm seeing, though. |
12:20 |
gmcharlt |
nope |
12:20 |
Dyrcona |
I reverted back to the commit before that and still see it. |
12:21 |
gmcharlt |
Dyrcona: not seeing in rel_2_2 or 2.2.x, I assume? |
12:22 |
Dyrcona |
I didn't try those, but I can if you want. |
12:24 |
gmcharlt |
please |
12:26 |
Dyrcona |
will do |
12:29 |
|
snowball_ joined #evergreen |
12:31 |
|
mrpeters joined #evergreen |
12:34 |
Dyrcona |
Argh! rel_2_2 won't even start properly. I guess I need to reconfigure. |
12:35 |
|
smyers_ joined #evergreen |
12:35 |
Dyrcona |
Ah, never mind... been ages since I last used osrf_ctl.sh, and forgot the -l option. |
12:36 |
* bshum |
still isn't used to the new ways yet. |
12:37 |
|
Bmagic joined #evergreen |
12:39 |
Dyrcona |
I seem to be getting the same behavior with rel_2_2, which makes me wonder if a git clean was good enough. |
12:39 |
|
jihpringle joined #evergreen |
12:40 |
|
kbeswick joined #evergreen |
12:42 |
pastebot |
"Dyrcona" at 64.57.241.14 pasted "log from right after the fleshed search_action_circulation to login failure" (35 lines) at http://paste.evergreen-ils.org/12 |
12:45 |
eeevil |
Dyrcona: out of date or otherwise incorrect IDL (specifically WRT aou) in /openils/conf/ perhaps? |
12:47 |
|
sseng joined #evergreen |
12:47 |
Dyrcona |
This is a clean master installation on a fresh VM. |
12:47 |
Dyrcona |
I ran autogen.sh. |
12:47 |
Dyrcona |
I don't see how the fm_IDL.xml could be out of date, unless the one in master is out of date. |
12:48 |
* jeff |
pokes through the mailchimp API |
12:49 |
Bmagic |
Good day everyone. I have created a new EG server to replace an older one. The staff client on the old server was set to autoupdate. It errors and returns "Update XML file malformed (200)" The older machine was serving 2.3.3 and the new one is 2.4.1 |
12:49 |
eeevil |
Dyrcona: well, it's probably not and I'm just looking at the wrong error ;) ... XML stanza is too big. what does the ejabberd log say about that? |
12:51 |
eeevil |
of course, that seems unlikely to come from open-ils.cstore open-ils.cstore.direct.actor.org_unit.search {"parent_ou":null}, but that's what it's saying. the failure of that is causing the "0" in open-ils.storage open-ils.storage.permission.user_has_perm 604362, STAFF_LOGIN, 0 ... that's wrong unless your org tree has a top at id=0 |
12:55 |
Dyrcona |
eeevil: The ejabberd.log says nothing about the stanza too big. |
12:55 |
eeevil |
Dyrcona: that's annoying |
12:56 |
Dyrcona |
The ejabberd.log says very little other than a lot of messages about using "legacy authentication." |
12:56 |
Dyrcona |
min org unit id is 1. |
12:57 |
Dyrcona |
I *think* the max stanza size comes from the search_action_circulation, but is hanging around too long. |
12:58 |
pastebot |
"Dyrcona" at 64.57.241.14 pasted "precedes the pervious in the logs" (15 lines) at http://paste.evergreen-ils.org/13 |
13:00 |
|
jeffdavis joined #evergreen |
13:00 |
Dyrcona |
I don't get the behavior if I don't try to flesh record from acn. |
13:00 |
dbs |
Bmagic: hmm. can you check http://hostname/updates/update.rdf ? |
13:01 |
eeevil |
you could make it streaming using the substream=>1 option to the cstore call. but ISTM that the login is def failing too |
13:01 |
dbs |
Bmagic: there are a bunch of XML files associated with updates but that looks like the first one |
13:01 |
Dyrcona |
I tried using substream => 1 to no avail. |
13:02 |
Dyrcona |
Well, I still got no results, but maybe then the login didn't fail... |
13:02 |
dbs |
Bmagic: maybe also see if it's getting gzipped or anything: curl -I http://hostname/updates/update.rdf (doesn't seem to be on ours) |
13:02 |
csharp |
okay - testing something with our bots |
13:02 |
bshum |
Bmagic: I'd be curious what you did to create the updates on the new server. Did you copy the archives from the old server to the new one and then make updates-client there? |
13:03 |
|
serflog joined #evergreen |
13:03 |
|
Topic for #evergreen is now Welcome to the #evergreen library system channel! | We are publicly logged. | Large pastes at http://paste.evergreen-ils.org |
13:03 |
|
pinesol_green joined #evergreen |
13:03 |
eeevil |
Dyrcona: 1,996,814 is very close to 2000000. is that your max stanza size? |
13:03 |
|
hbrennan joined #evergreen |
13:03 |
Dyrcona |
Bmagic: You won't get autoupdate files unless the previous mar files were copied from the old server before build. |
13:04 |
Dyrcona |
eeevil: Yes, 2 million, it is. |
13:09 |
|
serflog joined #evergreen |
13:09 |
|
Topic for #evergreen is now Welcome to the #evergreen library system channel! | We are publicly logged. | Large pastes at http://paste.evergreen-ils.org |
13:09 |
jeff |
(daemon?) |
13:09 |
Dyrcona |
eeevil: FYI, I get the same behavior with substream => 1. |
13:09 |
eeevil |
imagine exactly one cstore backend, for simplicity. request comes in during run 1, result is too big. however, cstore doesn't wait around to hear about that. it just tosses the final result on the network and, because this is a stateless connection, goes back to its parent and waits for the next request |
13:11 |
|
pastebot joined #evergreen |
13:11 |
eeevil |
(actually, let's imagine 2 cstore backends that are used alternately) ... then, run 2 starts out with an auth request. same backend that sent the overlarge result gets the "complete" call, and the jabber layer of opensrf sees the error that's been waiting on the socket since we sent said result. |
13:11 |
|
pinesol_green joined #evergreen |
13:11 |
eeevil |
and disconnects from jabber |
13:12 |
eeevil |
now, we throw away opensrf messages that are sitting on the jabber socket when we go to process a request passed in from the listener |
13:13 |
bshum |
csharp++ # bot wrangling |
13:13 |
eeevil |
but, this isn't one of those, and it looks like we don't handle xmpp-level errors waiting for us on the backend's own socket very gracefully |
13:14 |
eeevil |
Dyrcona: by "same behavior" you mean you see an atomic search call, and get a "stanza too big" message? |
13:14 |
eeevil |
in the logs, I mean |
13:32 |
Dyrcona |
eeevil: Yes, exactly the same as without the substream. |
13:33 |
|
stevenyvr joined #evergreen |
13:34 |
|
rfrasur joined #evergreen |
13:44 |
RoganH |
jeff++ |
13:45 |
kmlussier |
RoganH, bshum: I won't be around on the 20th for our next web team meeting. Do either of you want to run the meeting or should I reschedule? |
13:46 |
bshum |
kmlussier: Unless there's a pressing need for something, maybe it's something we can push off to the conference? |
13:46 |
RoganH |
kmlussier: I can run it if you want but I agree with ben, I don't mind putting it off unless there's something pressing. I have to admit I'm slammed right now. |
13:47 |
* kmlussier |
looks for old action items. |
13:47 |
RoganH |
I'm jumping around so much I almost typed something in general chat about ladies of special services undergoing hormone therapy that would not have been appropriate here. |
13:51 |
kmlussier |
So where were you planning to type the hormone therapy thing that would have been more appropriate? |
13:51 |
|
stevenyvr joined #evergreen |
13:52 |
RoganH |
No, I went back to the place I meant to type into and said the thing that would be inappropriate here there. |
13:52 |
RoganH |
Being more appropriate? That's just crazy talk. |
13:53 |
Dyrcona |
Reminds me of a bash.org quote..... |
13:53 |
Dyrcona |
It ends with "When is it ever the right channel?" |
13:54 |
RoganH |
lol |
13:54 |
kmlussier |
RoganH, bshum: I don't think there is anything pressing in the action items. When I finish one of mine, I can share what I've done with the list. |
13:55 |
RoganH |
kmlussier: I think we can use the list to communicate in the interim. (not that a great deal seems to get accomplished on the list usually but there's always hope for this new fanged email thing) |
13:55 |
RoganH |
Anyway, off for a bit. |
13:56 |
kmlussier |
@love new fanged email thing |
13:56 |
pinesol_green |
kmlussier: The operation succeeded. kmlussier loves new fanged email thing. |
13:57 |
|
artunit joined #evergreen |
14:01 |
Bmagic |
bshum: I did not copy any files from the old server to the new server. The files to copy are /openils/var/web/xul/rel_2_3_3_x/* -R ? |
14:02 |
bshum |
I can't believe it's GSOC time again already. |
14:03 |
bshum |
Bmagic: No.... There are archive mar files that need to be copied to keep the proper autoupdate chain if moving to a different server.. |
14:03 |
bshum |
I don't have the path of the tip of my tongue... Hmm |
14:03 |
eeevil |
Dyrcona: that's unfortunate, since it suggests that substream is not working for you. I've not seen that before. only thing I can suggest (assuming my theory is correct and there are no larger issues with the install) is to increase max_stanza_size |
14:03 |
Bmagic |
interesting, I'm learning something new |
14:05 |
Dyrcona |
eevil: I get no problem if I switch to using open-ils.cstore.direct.action.circulation.search with recv() in a loop, i.e. a real streaming request. |
14:05 |
Dyrcona |
oops, missed an e in eeevil |
14:06 |
Dyrcona |
this is just something I'm messing with as a potential bit of code to fix lp 1208875. |
14:06 |
pinesol_green |
Launchpad bug 1208875 in Evergreen "OPAC: My Account: Download Checkout History CSV breaks when there are a large number of items in the history" (affected: 3, heat: 16) [Medium,Confirmed] https://launchpad.net/bugs/1208875 |
14:06 |
Dyrcona |
My thought is just do the fleshed retrieval and spit the CSV directly out of TPAC, rather than going through action_trigger. |
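A minimal sketch of the streaming pattern Dyrcona describes, assuming the stock OpenSRF Perl API; the query and flesh fields here are placeholders rather than his actual code.

    # Streaming cstore search: results arrive one at a time via recv(),
    # so no single response can blow past ejabberd's max_stanza_size.
    use OpenSRF::AppSession;

    my $ses = OpenSRF::AppSession->create('open-ils.cstore');
    my $req = $ses->request(
        'open-ils.cstore.direct.action.circulation.search',
        {usr => $user_id},   # placeholder query
        {flesh => 1, flesh_fields => {circ => ['target_copy']}}
    );
    while (my $resp = $req->recv(timeout => 60)) {
        my $circ = $resp->content or last;
        # ... emit one CSV row per circulation here ...
    }
    $ses->disconnect;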
14:06 |
bshum |
Bmagic: /openils/var/updates/archives |
14:06 |
bshum |
On my server, that's where the *.mar files end up |
14:06 |
Dyrcona |
However, now I have something else to do. |
14:07 |
Bmagic |
bshum: thank you! And then recompile win-client? |
14:07 |
bshum |
I would copy those from the old server to the new server so that your update command has something to build off of. |
14:07 |
eeevil |
Dyrcona: gotcha. that'll be good to fix, so, thanks ;) would you mind pasting your substream version? I'd like to trace the code to see what's going on there. |
14:07 |
Dyrcona |
eeevil: np. I may be doing it wrong. |
14:09 |
bshum |
Bmagic: What command are you using to "recompile win-client"? |
14:10 |
bshum |
And are you passing off the same autoupdate variable? (will the new server use the same hostname as the old one later?) |
14:10 |
Bmagic |
bshum: How about make rebuild && make win-client && make updates-client |
14:10 |
bshum |
That's more of a "how are you changing servers" |
14:11 |
pastebot |
"Dyrcona" at 64.57.241.14 pasted "for eeevil:" (59 lines) at http://paste.evergreen-ils.org/10 |
14:12 |
Bmagic |
bshum: same DNS name, different IP |
14:12 |
Bmagic |
bshum: Different OS, newer version of EG and opensrf |
14:12 |
eeevil |
Dyrcona: ah! yes, substream does not go inside the extra blob that gets passed to cstore, it's a second top-level option that CStoreEditor consumes directly |
14:12 |
Dyrcona |
Ironically, you don't actually have to login to use cstore like that if you're on the back end side of things. |
14:13 |
Dyrcona |
Ah, ok, then. |
14:13 |
Dyrcona |
I'm having better luck with OpenSRF::AppSession now anyway. |
14:13 |
eeevil |
so it would be $e->search_blah([$query,$flesh_blob],{substream => 1}) |
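Spelled out with CStoreEditor, eeevil's corrected call looks roughly like this; the circulation search and flesh fields are illustrative.

    # substream is a second, top-level option consumed by CStoreEditor
    # itself; it does not go inside the flesh blob passed to cstore.
    use OpenILS::Utils::CStoreEditor q/:funcs/;

    my $e = new_editor();
    my $circs = $e->search_action_circulation(
        [ {usr => $user_id},                                    # query
          {flesh => 1, flesh_fields => {circ => ['target_copy']}} ],
        { substream => 1 }                                      # editor-level option
    );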
14:14 |
Bmagic |
bshum: The STAFF_CLIENT_BUILD_ID will probably be different for sure (do I need to make them equal for the update to work?) |
14:14 |
bshum |
Bmagic: I'm unsure, but I think doing "make rebuild" without variables could damage the client build if there's supposed to be autoupdate involved. |
14:15 |
bshum |
Or rather, create a client without autoupdate enabled |
14:15 |
Bmagic |
I left a couple of make commands out |
14:15 |
Bmagic |
make AUTOUPDATE_HOST=myhost.domain.com STAFF_CLIENT_STAMP_ID=rel_2_4_1 |
14:17 |
Bmagic |
bshum: make STAFF_CLIENT_BUILD_ID=rel_2_4_1 install && make rebuild && make rigrelease && make win-client && make updates-client |
14:17 |
bshum |
I think those are in the wrong order. |
14:17 |
bshum |
You probably only need to "make rebuild" after doing a "make rigrelease" |
14:18 |
Bmagic |
bshum: right on |
14:18 |
bshum |
And if you make rebuild without passing the values for AUTOUPDATE_HOST, it might not remember that. (I don't remember now what happened to me, maybe I'm remembering wrong) |
14:18 |
bshum |
If it was me, I would make rigrelease first, then do your usual make install dance |
14:18 |
tsbere |
bshum/Bmagic: make rigrelease rebuild updates-client |
14:18 |
bshum |
Or there's that |
14:19 |
tsbere |
or: make AUTOUPDATE_HOST=hostname rigrelease rebuild updates-client |
14:19 |
tsbere |
Or just use the updateshost (--with-updateshost=?) configure option and move on.... |
14:22 |
bshum |
Unrelated, action trigger granularity is making my head hurt. That and suddenly it seems that I have so many action trigger runner failures lately. |
14:23 |
tsbere |
bshum: Granularity is easy. <_< |
14:23 |
bshum |
I must be doing something wrong then. |
14:24 |
tsbere |
I am willing to help you figure it out |
14:24 |
bshum |
Okay... so... |
14:24 |
bshum |
I seem to have cron entries with commands like: cd /openils/bin && /usr/bin/perl ./action_trigger_runner.pl --osrf-config /openils/conf/opensrf_core.xml --process-hooks --granularity daily-lost |
14:24 |
bshum |
Which I think would grab anything with "daily-lost" as the granularity |
14:25 |
tsbere |
yep. But not Daily-Lost. |
14:25 |
bshum |
But then, when the --run-pending comes up |
14:25 |
bshum |
It doesn't catch them |
14:25 |
bshum |
And I have to do something like --run-pending --granularity daily-lost |
14:25 |
bshum |
To get it |
14:25 |
bshum |
But doing that skips the others and the null ones |
14:26 |
bshum |
Otherwise, things just stay collected |
14:26 |
bshum |
It seems, or maybe I'm not seeing it right |
14:26 |
* tsbere |
should double-check MVLC's events |
14:27 |
bshum |
I end up having to have multiple --run-pending --granularity daily-* to catch all the types I have. |
14:27 |
bshum |
Now, the weird thing |
14:27 |
bshum |
Is that I'm trying to see how the current slew of "collected" stuff is stuck. |
14:28 |
bshum |
And those seem to be mainly hold notifications now. |
14:28 |
bshum |
So I'm not sure if it's just a trailing stuck event, or if my fiddling with granularities that has broken something |
14:29 |
tsbere |
Well, unless you have --granularity-only included run-pending should be grabbing both no-granularity and that particular granularity |
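Put together as crontab entries, the pattern under discussion looks something like the following; the command and flags are copied from the lines above, and the schedule times are made up.

    # Collect events for one granularity...
    0 2 * * * cd /openils/bin && /usr/bin/perl ./action_trigger_runner.pl --osrf-config /openils/conf/opensrf_core.xml --process-hooks --granularity daily-lost
    # ...then run pending events. Without --granularity-only this should
    # pick up both daily-lost events and events with no granularity set.
    30 2 * * * cd /openils/bin && /usr/bin/perl ./action_trigger_runner.pl --osrf-config /openils/conf/opensrf_core.xml --run-pending --granularity daily-lost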
14:29 |
bshum |
That's weird then |
14:29 |
bshum |
I wonder why it doesn't seem to do that. |
14:29 |
tsbere |
We do that with a password thing that runs every five minutes. |
14:31 |
tsbere |
bshum: Did you know you can combine run-pending and process-hooks onto one command line? <_< |
14:31 |
bshum |
Yes. |
14:31 |
bshum |
Though in my case, that'll definitely generate some sort of warning. |
14:31 |
bshum |
I think |
14:31 |
tsbere |
? |
14:31 |
bshum |
Gah, hates triggers |
14:32 |
bshum |
Well it'll do a cron already running message I mean |
14:32 |
bshum |
If the time when all that stuff coincides |
14:32 |
bshum |
But I get that already with certain things |
14:33 |
* tsbere |
has a fairly straightforward set of crontab entries and MVLC isn't seeing major problems |
14:34 |
* tsbere |
should probably figure out the 2277 events that are in limbo some day, though |
14:38 |
Dyrcona |
That reminds me. I should set up the fine generator on my dev VM again. |
14:38 |
Dyrcona |
I think I'll go back to OpenSRF master, while I'm at it. |
14:40 |
|
kbutler joined #evergreen |
14:48 |
|
kbeswick joined #evergreen |
15:37 |
gmcharlt |
eeevil: Dyrcona: regarding your discussion, would I be correct in thinking that the issue is not a blocker for osrf-2.3.0-beta? |
15:38 |
Dyrcona |
gmcharlt: I would say no, since I can reproduce it with 2.2. |
15:38 |
gmcharlt |
no, I'm not correct, or no, it's not a blocker? ;) |
15:40 |
eeevil |
gmcharlt: no, it's not a blocker. |
15:41 |
eeevil |
a too-small max_stanza_size has always been a problem, this is just the first time I've personally analyzed the specific failure path seen here |
15:48 |
berick |
am i crazy or does upgrade 0854 not have a corresponding change in 002.schema.config.sql? |
15:49 |
|
RBecker joined #evergreen |
15:50 |
bshum |
It does seem to be missing. |
15:50 |
berick |
i'll push a tiny fix if someone wants to signoff |
15:50 |
bshum |
Meh, just push it in. |
15:50 |
bshum |
:) |
15:51 |
bshum |
Or I can, whichever |
15:51 |
* bshum |
pokes jeff with his git blame stick |
15:51 |
* jeff |
looks up |
15:52 |
bshum |
I think when you stamped 0854, berick was just reminding us to make sure the entry in 002.schema.config.sql gets updated with the newest entry too. |
15:53 |
berick |
user/berick/0854-upgrade-stamp |
15:53 |
* jeff |
grabs branch |
15:54 |
berick |
gracias |
15:55 |
jeff |
berick++ |
15:55 |
jeff |
thanks for the catch. |
15:56 |
jeff |
that was my first upgrade script stamp, and i missed it. |
15:57 |
jeff |
bshum: thanks for getting my attention. won't so easly forget next time. :-) |
15:58 |
pinesol_green |
[evergreen|Bill Erickson] Bumping base schema version to match latest upgrade - <http://git.evergreen-ils.org/?p=Evergreen.git;a=commit;h=bd17b2f> |
15:58 |
jeff |
at first i thought the issue was that the new org unit settings were only in the upgrade script and not in the base schema. i was pretty sure that wasn't the case. :-) |
15:58 |
berick |
jeff: heh, well, it's harmless. |
15:58 |
bshum |
jeff++ berick++ |
15:58 |
berick |
... this time.. *ominous music* |
15:59 |
Dyrcona |
gmcharlt: Sorry, stepped away, but I meant what eeevil said. |
16:00 |
gmcharlt |
Dyrcona: thanks -- I figured as much, but it is not to the disbenefit of anyone that there be no maximization of confusion on this point |
16:00 |
gmcharlt |
;) |
16:00 |
Dyrcona |
:) |
16:04 |
bshum |
Bleh |
16:04 |
bshum |
My A/T runner keeps dying out after long periods of waiting |
16:04 |
bshum |
And then jabber noise. |
16:05 |
bshum |
Funny talking about ejabberd's max_stanza_size and now I'm getting lots of death in that area today :( |
16:15 |
jeff |
bshum: it KNOWS |
16:19 |
sseng |
quick question about the memory leak patches (1086458). I see that it has been backported to 2.4. For this situation with regards to 2.4 release and the memory leak, does this mean that, for example, the memory leak patch only works with rel_2_4? That is to say, it will not work with, say, (tags/rel_2_4_1), etc? thanks! |
16:19 |
sseng |
https://bugs.launchpad.net/evergreen/+bug/1086458 |
16:19 |
pinesol_green |
Launchpad bug 1086458 in Evergreen 2.4 "Staff client memory leaks in 2.3 and later" (affected: 11, heat: 76) [High,Fix released] |
16:21 |
bshum |
sseng: I'm not sure I understand what you're asking |
16:22 |
bshum |
If you're asking what I think you are, then I would imagine that any changes made to rel_2_4's branch after the tag branch rel_2_4_1 was split off would only work if the changes in between didn't touch on the same files. |
16:23 |
sseng |
bshum: np, let me give it another attempt =). For example, there were four commits for the memory leak applied to rel_2_4. These commits, will they work against say, a tag rel_2_4_1? |
16:23 |
sseng |
bshum: yep, that's exactly what I meant. |
16:24 |
bshum |
Personally I'm not a fan of picking changes with gaps between them, since I wouldn't know if the gap code builds on the changes or not. |
16:24 |
Dyrcona |
sseng: Your best bet would be to upgrade to the lastest 2.4. |
16:25 |
sseng |
bshum: Dyrcona: alright, that makes sense. thanks!! |
16:26 |
bshum |
jeff: It really does... :( |
16:35 |
|
gdunbar joined #evergreen |
16:36 |
|
berick_ joined #evergreen |
16:38 |
|
jeffdavi1 joined #evergreen |
16:38 |
|
bradl_ joined #evergreen |
16:41 |
|
BigRig joined #evergreen |
16:49 |
|
stevenyvr joined #evergreen |
16:51 |
|
jeffdavis joined #evergreen |
16:52 |
|
akilsdonk joined #evergreen |
17:08 |
|
mrpeters left #evergreen |
17:10 |
|
mmorgan left #evergreen |
17:13 |
bshum |
Does this mean anything to anyone? |
17:13 |
bshum |
2014-02-03 16:34:47 guitar open-ils.trigger: [WARN:16204:Application.pm:590:] open-ils.trigger.event.find_pending_by_group: Use of uninitialized value $ident_value in hash element at /usr/local/share/perl/5.14.2/OpenILS/Application/Trigger.pm line 778. |
17:13 |
bshum |
I see that spring up a whole bunch of times in succession |
17:26 |
dbwells |
bshum: Just looking at the code, maybe you have a group_field value set to something which isn't an actual field on targets for that event? |
17:29 |
bshum |
dbwells: Bleh, that's disconcerting.... off to check all the events again. |
17:29 |
bshum |
Something is stalling out the action_trigger_runner and ejabberd is closing out the connection before it can finish the task. But then the process is left hanging with no end. |
17:30 |
bshum |
Sigh |
17:37 |
|
loveirc-bot1 joined #evergreen |
17:44 |
sseng |
another question: is it a true statement that, whatever is in tag rel_2_4_1 is in rel_2_4? (that is to say, rel_2_4 keeps moving forward, and tags such as rel_2_4_1 and rel_2_4_2 are snapshots of rel_2_4 at some point, so that rel_2_4 will contain tags rel_2_4_x, but not vice versa) |
17:44 |
gmcharlt |
sseng: in essence, yes |
17:45 |
gmcharlt |
tags/rel_2_4_1, say, is not a true Git tag, it's a branch |
17:46 |
gmcharlt |
but it consists of whatever is in rel_2_4 at the time tags/rel_2_4_1 was branched off, plus a few commits that contain release-specific stuff, such as bumping up version numbers, the change log, release notes for 2.4.1, and so forth |
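One way to see that relationship directly in the repository, using plain git against the Evergreen remote:

    # Sketch: list the release-specific commits that exist on the
    # tags/rel_2_4_1 branch but not on rel_2_4 (version bumps, release
    # notes, changelog, and so forth).
    git log --oneline origin/rel_2_4..origin/tags/rel_2_4_1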
17:48 |
sseng |
gmcharlt: ok, that makes sense, thanks! |
17:49 |
bshum |
Those changes tend to make it back to the main branch eventually. Usually forward ported at some point after the release has been made official. |
17:50 |
bshum |
Hey cool, the upgraded opensrf master understands better when the trigger side is dead and gone and doesn't just hang its head waiting for nothing when trying to shut down. |
17:50 |
bshum |
That's nice. |
17:50 |
gmcharlt |
that is most consequential for the schema upgrade scripts |
17:50 |
bshum |
Downside, I still can't figure out why the trigger is dying in the first place :( |
17:51 |
|
dcook joined #evergreen |
17:51 |
|
book` joined #evergreen |
17:59 |
|
smyers__ joined #evergreen |
18:01 |
kmlussier |
Is tomorrow still the deadline for targeting bugs to 2.6 beta? |
18:12 |
bshum |
kmlussier: dbwells would be the big decider, but I believe I heard inklings that there may be some deadline extension. |
18:12 |
kmlussier |
bshum: Thanks! |
18:15 |
dbwells |
kmlussier: Yes, the deadline is being extended. The exact amount is not yet determined, but the new deadline will likely be Friday or Tuesday. Monday deadlines just don't work :) I'll be sending a brief email to the list about the extension later tonight. |
18:15 |
kmlussier |
dbwells: Good to know. Thank you! |
18:20 |
bshum |
dbwells: Ahh, it might be our SMS notifications trigger. It groups by looking for the sms_notify field which is usually a phone number entered. |
18:20 |
bshum |
Maybe if it's blank or otherwise not right, it gives those unhappy errors. |
18:21 |
bshum |
dbwells: Thanks for the pointers, I'll keep digging. |
18:21 |
bshum |
And also murdering my trigger process over and over as I inch through a backlog of A/T events that I'm manually queuing bit by bit as I investigate |
18:22 |
bshum |
Maybe I'm missing some sort of validator that checks this stuff |
18:22 |
bshum |
It feels like there ought to be something that just doesn't run through the whole notice if there isn't even a proper field to begin with. |
18:28 |
|
loveirc-bot1 joined #evergreen |
18:28 |
dbwells |
bshum: I would not expect an empty value to give an "uninitialized value" type warning. An empty string is actually a valid hash key in Perl (though not a very good one :) |
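A quick demonstration of the distinction dbwells is drawing:

    # An empty string is a valid hash key; it is undef that triggers the
    # "Use of uninitialized value ... in hash element" warning seen above.
    use strict;
    use warnings;

    my %h;
    $h{''} = 'fine';         # valid key, no warning
    my $ident_value;         # undef, like $ident_value in Trigger.pm
    $h{$ident_value} = 'x';  # warns, then quietly uses '' as the key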
18:29 |
bshum |
Heh, okay. |
18:29 |
bshum |
Well, once processed, those A/T events get turned into "invalid" state anyways. |
18:29 |
bshum |
Probably because they're blank like that. |
18:29 |
bshum |
But depending on grouping |
18:30 |
bshum |
It's trying to group all those invalids into one giant action maybe? |
18:30 |
bshum |
And that giant action is killing things |
18:30 |
* bshum |
is just shooting at the darkness now |
18:36 |
dbwells |
bshum: If you are willing to deal with side effects, have you tried running 'open-ils.trigger.event.find_pending_by_group' directly, by itself? If that fails all by itself, it might help filter out the noise when troubleshooting. |
18:38 |
bshum |
dbwells: Right now I'm requeuing broken A/T events in batches of 50 (was 100, now I'm down to 50) inching forward and checking logs as it goes. |
18:38 |
bshum |
What would happen is that when it reaches a particular pain point, the ejabberd connection for listener would just die out and the whole process would just hang. |
18:38 |
bshum |
And nothing other than killing the action_trigger_runner and then restarting services would bring me back |
18:39 |
bshum |
I can try the individual service call |
18:39 |
bshum |
But I'm not too practiced at that. |
18:39 |
dbwells |
If you do, it leaves whatever it touches in a "collected" state which you will probably need to undo to get them to run the normal way. |
18:40 |
dbwells |
There might be more effects, so if you feel like your process is working, don't listen to me. |
18:40 |
bshum |
Well I think this will help me clear the backlog (very slowly) |
18:40 |
bshum |
But I do need to get to the bottom of what's different now |
18:40 |
|
smyers_ joined #evergreen |
18:43 |
dbwells |
bshum: Do you know if you have the parallel event collecting turned on? |
18:43 |
bshum |
dbwells: Yes, we do. That's something I've been wondering about. |
18:43 |
bshum |
I was just adjusting our children for trigger to be much higher than before. But I wondered if maybe it's too much parallel (ours is set to 3) |
18:44 |
bshum |
Or too little? |
18:44 |
bshum |
Hehe |
18:45 |
dbwells |
I have no idea if it is related at all, but it's at least something simple to try (either turning off or turning up) and see if the behavior changes. |
18:45 |
bshum |
Certainly worth a shot |
18:45 |
bshum |
Since I'm poking dangerously anyways |
18:48 |
dbwells |
It's the only way to poke. Nothing's going to happen until you wake that bear up! |
18:50 |
bshum |
Upping to 10 for collector/reactor didn't help. Though it did process through the remaining 222 events very quickly. |
18:50 |
bshum |
And then crashed trigger |
18:50 |
bshum |
:) |
18:50 |
bshum |
I commented it out this time |
18:50 |
bshum |
And will see what happens running those same 222 triggers that failed again |
18:52 |
bshum |
Interesting.... |
18:52 |
bshum |
Success! |
18:52 |
bshum |
So the parallel thing is unhappy |
18:53 |
bshum |
I'll leave it disabled overnight and see what happens by morning after the next round of A/T events get created. |
18:54 |
dbwells |
That is very interesting, indeed. Seems to me to point to something fairly low level, maybe a recent OpenSRF change or something. |
18:54 |
bshum |
Well, we are using OpenSRF master. |
18:54 |
dbwells |
I am assuming you have had parallel on for a while. |
18:55 |
bshum |
Yeah it's untouched and comes that way out of the box for opensrf.xml |
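For reference, the stanza in question sits in the open-ils.trigger app settings in opensrf.xml; if memory serves it looks roughly like the following, though the element names are worth double-checking against your opensrf.xml.example.

    <!-- illustrative sketch; shipped commented out in opensrf.xml.example -->
    <parallel>
      <collect>3</collect>  <!-- parallel event collector processes -->
      <react>3</react>      <!-- parallel reactor processes -->
    </parallel>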
18:56 |
dbwells |
Hmm, I see those lines commented on my test box, but I may not have updated opensrf.xml in a while. |
18:57 |
dbwells |
In any case, I am glad you found a possible workaround for now. |
18:59 |
bshum |
dbwells: Yep I'll check the systems tomorrow and see if there's a backlog again of broken collected state A/T events. |
18:59 |
bshum |
Thanks for the advice! dbwells++ |
19:41 |
hbrennan |
If anyone's still here, I'm having a "moment" and can't find where to change the text on the OPAC login screen that says "If this is your first time logging in, please enter the last 4 digits of your phone number. Example: 0926" |
19:41 |
hbrennan |
Unless I'm blind, it's not in openils/var/templates/opac/parts/login/form with the rest of the text I can edit |
19:43 |
hbrennan |
Just kidding, found it in the OTHER templates folder |
19:43 |
hbrennan |
Thanks for the help, empty IRC chat! |
19:47 |
bshum |
Heh |
19:47 |
bshum |
The void helps now and then. |
19:47 |
hbrennan |
Sometimes I just need the pressure |
19:48 |
hbrennan |
Too bad now that I've found it I can't edit it |
19:48 |
hbrennan |
darn |
19:48 |
bshum |
Blah! |
19:48 |
bshum |
Permission issue? |
19:48 |
hbrennan |
yes |
19:48 |
hbrennan |
They have me on lockdown |
19:48 |
bshum |
Boo |
19:49 |
hbrennan |
custom template folder only, plus individual files I beg for |
19:50 |
hbrennan |
When we had the IT manager who didn't care, I had access to the whole server! |
19:52 |
hbrennan |
someday... |
22:56 |
|
dluch joined #evergreen |
22:59 |
|
dluch2 joined #evergreen |