Time
Nick
Message
05:30 |
|
JBoyer joined #evergreen |
06:00 |
pinesol |
News from qatests: Failed Installing Angular web client <http://testing.evergreen-ils.org/~live//archive/2022-03/2022-03-22_04:00:05/test.29.html> |
06:36 |
|
rfrasur joined #evergreen |
06:50 |
|
rjackson_isl_hom joined #evergreen |
07:58 |
|
collum joined #evergreen |
08:33 |
|
jvwoolf joined #evergreen |
08:37 |
|
mmorgan joined #evergreen |
08:39 |
|
mantis joined #evergreen |
09:05 |
|
Dyrcona joined #evergreen |
09:07 |
* Dyrcona |
cranks up the Led Zeppelin and gets to work. |
09:07 |
Dyrcona |
When I listen to Led Zeppelin, the whole neighborhood listens to Led Zeppelin. :P |
09:10 |
mmorgan |
:) |
09:10 |
* mmorgan |
would take that over "March of the Leaf Blowers" |
09:13 |
Dyrcona |
:) |
09:14 |
jvwoolf |
I'm listening to my husband read to my kid while he coughs. Very wholesome. |
09:15 |
jvwoolf |
(Kid is coughing, not husband) |
09:15 |
mmorgan |
:) |
09:28 |
|
rfrasur joined #evergreen |
09:29 |
Dyrcona |
Reminds me of reading with my daughter when she was a toddler. Our favorite book was The Rattletrap Car. At one point, we both had it memorized and could recite it at will. After c. 20 years... not so much. |
09:30 |
Dyrcona |
"It didn't go fast and it didn't go far, but we made it to the lake in our rattle-trap car." |
09:30 |
jvwoolf |
Aw. I'm not familiar with that one. |
09:31 |
jvwoolf |
We all have The Very Hungry Caterpillar memorized |
09:32 |
Dyrcona |
Yeah. The Very Hungry Caterpillar is a classic. My favorite(s) when I was little were Where the Wild Things Are and Big Joe's Trailer Truck. |
09:33 |
JBoyer |
I initially read that as memo-ized. "In Re: food, we need to talk" |
09:34 |
Dyrcona |
JBoyer: I was thinking memoized as I typed it, as in (defun memoize (val)...) ;) |
09:36 |
* Dyrcona |
cranks "Communication Breakdown" to 11. |
09:36 |
Dyrcona |
(Not really....) :) |
09:37 |
jvwoolf |
JBoyer++ |
09:37 |
jvwoolf |
"To do: eat 2 pears on Tuesday" |
09:39 |
jvwoolf |
So our action trigger purge didn't complete again over the weekend, even after doing a full vacuum on the action_trigger.event and action_trigger.event_output tables. |
09:40 |
jvwoolf |
mmorgan: Did you say that yours got hung up on purging the event output? |
09:41 |
* mmorgan |
tries to remember. |
09:41 |
Dyrcona |
jvwoolf: If you delete cascade on action_trigger.event_output, it deletes from action_trigger.event, too. You could purge event output with a reasonable limit of 1 million or so, and delete them over time. |
09:44 |
jvwoolf |
Dyrcona: Thank you, I was hoping to be able to break this up into batches. Could that be done while the system is up, do you think? |
09:45 |
mmorgan |
Our sysadmin did the original purge, but the action_trigger.purge_events() function has a good bit of info at the top. |
09:45 |
mmorgan |
https://git.evergreen-ils.org/?p=Evergreen.git;a=blob;f=Open-ILS/src/sql/Pg/400.schema.action_trigger.sql;hb=5b7187ce79dd13cc567d6dee5c46554333b9b5e1#l337 |
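For reference, a minimal invocation sketch of that function, assuming the no-argument signature; per the later discussion it decides what to purge from each event definition's retention interval:

    -- Assumed signature: action_trigger.purge_events() takes no arguments
    -- and consults each definition's retention_interval.
    SELECT action_trigger.purge_events();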
09:46 |
mmorgan |
jvwoolf: I think batches could be done while the system is up, but would defer to Dyrcona :) |
09:47 |
JBoyer |
jvwoolf, what version are you on? I forget when the indexes on action_trigger.event (all of the output fields) went in, but they were basically required to make that function work on a large-ish system. |
09:47 |
jvwoolf |
JBoyer: We're on 3.6.5 |
09:49 |
jvwoolf |
JBoyer: What's funny is that we have tested this on many a test database with our production data, and it always completes in 15-20 minutes. |
09:51 |
Dyrcona |
jvwoolf: You should be able to purge everything while the system is up. I'm suggesting just doing: DELETE FROM action_trigger.event_output WHERE .... LIMIT {some number}; It should cascade to delete from action_trigger.event. |
09:51 |
mmorgan |
jvwoolf: Could any action triggers be running when you are attempting the purge? |
09:52 |
jvwoolf |
mmorgan: No, we took Evergreen offline and disabled all cron jobs |
09:52 |
Dyrcona |
jvwoolf: You may have a hardware or other issue with your production database. It may have an optimization setting that's not ideal for large deletes. |
09:52 |
* mmorgan |
nods |
09:53 |
mmorgan |
Dyrcona: jvwoolf: I think trying to delete event_output was where we ran into issues with our original purge. It was extremely slow. |
09:55 |
jvwoolf |
Dyrcona: We deleted those 30 million deleted URI call numbers a few weeks ago and that seemed to work out fine |
09:55 |
Dyrcona |
Y'know, looking at the DELETE documentation, it may have to be DELETE FROM action_trigger.event_output WHERE id IN (SELECT .... FROM .... WHERE ... LIMIT {some number}); |
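A sketch of that batched delete; the batch size, date cutoff, and the event table's template_output link column are assumptions here, and the inner criteria are placeholders:

    -- Hypothetical batch: cutoff, LIMIT, and join column are placeholders.
    DELETE FROM action_trigger.event_output
     WHERE id IN (
           SELECT o.id
             FROM action_trigger.event_output o
             JOIN action_trigger.event e ON e.template_output = o.id
            WHERE e.complete_time < NOW() - INTERVAL '6 months'
            LIMIT 1000000
     );
    -- Repeat until no rows remain; the delete cascades to action_trigger.event.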
09:56 |
jvwoolf |
Dyrcona++ |
09:56 |
JBoyer |
Ah, looks like the indexes went in 3.3.1. But, if you don't have the indexes in commit 3b3c4f8deb6104427397e19ccfb77043b812de88 that's a problem. (Even the ones on fields that are "never" used. All 3 are required) |
09:56 |
Dyrcona |
jvwoolf: What JBoyer said about indexes.... :) |
09:56 |
JBoyer |
I guess that plugin doesn't do it anymore. Anyway, lp 1778940 |
09:56 |
pinesol |
Launchpad bug 1778940 in Evergreen 3.2 "Indexes needed on A/T event output fields" [Medium,Fix released] https://launchpad.net/bugs/1778940 |
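A hedged sketch of the three output-field indexes that bug describes; the column and index names below are assumed from the discussion, so verify against commit 3b3c4f8 for the real DDL:

    -- Assumed column and index names; check the commit before running.
    CREATE INDEX atev_template_output ON action_trigger.event (template_output);
    CREATE INDEX atev_async_output    ON action_trigger.event (async_output);
    CREATE INDEX atev_error_output    ON action_trigger.event (error_output);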
09:57 |
Dyrcona |
JBoyer: I think the commit plugin only works on the master branch.... |
09:57 |
Dyrcona |
It also may not be loaded. |
09:57 |
jvwoolf |
JBoyer++ |
09:57 |
JBoyer |
Oh, that makes sense. I thought it was also having trouble at some point, maybe both. |
09:58 |
* Dyrcona |
refreshes his memory on copy (....) to csv.... 'cause newer psql with the --csv option has spoiled me. |
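The two options Dyrcona is weighing, sketched with a placeholder query and file path:

    -- Older psql: \copy runs COPY client-side and writes a local file.
    \copy (SELECT id, complete_time FROM action_trigger.event LIMIT 10) TO 'events.csv' CSV HEADER
    -- Newer psql (12+): the --csv output format does it directly.
    -- psql --csv -c "SELECT id, complete_time FROM action_trigger.event LIMIT 10" > events.csv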
09:59 |
Dyrcona |
Yeah, could just be busted after the recent upgrade. Possibly a Python 2 to 3 issue. |
09:59 |
jvwoolf |
Oh yep. |
09:59 |
jvwoolf |
According to Pg Admin, 0 indexes on action_trigger.event_output |
10:00 |
jvwoolf |
Same thing on the test databases where it worked, though |
10:02 |
jvwoolf |
Oh wait I take it back |
10:04 |
jvwoolf |
I looked in the wrong place; we have all of the indexes listed here on action_trigger.event |
10:05 |
JBoyer |
It would be weird if you didn't, but I was hoping that would find something. In that case, maybe compare the indexes on both that and event_output on test and prod and see if anything is different. |
10:06 |
Dyrcona |
Well, sometimes a DB upgrade script slips through the cracks. |
10:09 |
* Dyrcona |
is looking into a report that a patron got spammed with a call number via sms, so also poking around action_trigger.event at the moment. |
10:10 |
jvwoolf |
JBoyer: They look the same. Our process is not to reload test databases anyway; we replace them with the most recent dump of production we have when we need to update them. So our test database *should* always be the same as production. |
10:10 |
jvwoolf |
We do replication on production, that's the only difference |
10:11 |
Dyrcona |
Replication can slow large deletes and is a good reason to limit them. |
10:12 |
Dyrcona |
So, looks like the patron sent themselves the same call number 729 times over a 4-day period. I think I'll check the apache and/or gateway logs. |
10:12 |
JBoyer |
Ah, and depending on settings and what's happening on the replicant it can cause delays in large commits. Though I wouldn't really expect that for a/t stuff. |
10:14 |
Dyrcona |
jvwoolf: Are you using streaming replication or log shipping or both? We use both. We have had occasions where something went pathological and clobbered the replicant, causing it to fill either the log shipping area or the Pg partition and crash Pg. |
10:15 |
JBoyer |
jvwoolf, something that may be worth trying is to time deleting a single event and then its associated output with explain analyze to see where all of the time is being spent. It's possible the indexes aren't being used or need to be rebuilt. |
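A sketch of that timing test with placeholder ids; EXPLAIN ANALYZE actually executes the DELETE, so wrapping it in a rolled-back transaction keeps it harmless:

    BEGIN;
    -- 12345 and 67890 are placeholder ids; pick a real event and its output.
    EXPLAIN ANALYZE DELETE FROM action_trigger.event WHERE id = 12345;
    EXPLAIN ANALYZE DELETE FROM action_trigger.event_output WHERE id = 67890;
    ROLLBACK;  -- discard the test deletes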
10:16 |
jvwoolf |
Dyrcona: Looks like we're using streaming replication |
10:17 |
jvwoolf |
Dyrcona++ JBoyer++ |
10:17 |
jvwoolf |
This has been very helpful |
10:19 |
Dyrcona |
Look at that! Apache log entries corresponding with the a/t events. |
10:36 |
Dyrcona |
So, not an Evergreen bug... :) |
10:40 |
|
Christineb joined #evergreen |
11:00 |
Dyrcona |
I'm told the request was made once from an iPad in a "kiosk." So.... potential bug on the iPad/kiosk software. |
11:23 |
|
collum joined #evergreen |
11:36 |
|
jihpringle joined #evergreen |
12:06 |
miker |
jvwoolf: I'd add to JBoyer's thoughts that a mass delete like the purge function does, esp with an anti-join like that, likely /won't/ use indexes. I think this was at least hinted at before, but one thing you could do would be to CREATE TABLE purge_events AS SELECT id FROM atev WHERE complete_time < (NOW() - '6 months'); -- or whatever -- and then DELETE ... WHERE ... (SELECT id FROM purge_events ORDER BY id LIMIT xxx OFFSET yyy); -- repeat until |
12:06 |
miker |
all tables are wiped |
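Spelled out, miker's approach looks roughly like this, reading atev as action_trigger.event; the cutoff, LIMIT, and OFFSET values are placeholders to step through:

    CREATE TABLE purge_events AS
        SELECT id FROM action_trigger.event
         WHERE complete_time < NOW() - INTERVAL '6 months';  -- or whatever cutoff

    -- Step the OFFSET (0, 10000, 20000, ...) and rerun until every batch is gone.
    DELETE FROM action_trigger.event
     WHERE id IN (SELECT id FROM purge_events ORDER BY id LIMIT 10000 OFFSET 0);

    DROP TABLE purge_events;  -- once the backlog is cleared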
12:08 |
jvwoolf |
miker++ |
12:10 |
jvwoolf |
A challenge we have is that we want to purge some events and not others; we're paranoid about acq-related events |
12:11 |
jvwoolf |
So the function lets us do that by consulting the retention interval field in the event def, but it seems like we could write some SQL to achieve the same thing |
12:11 |
miker |
ah, well, then that should let you target (heh) just the events you want. just select events where the definition's hooks are the ones you know you want to purge |
12:12 |
miker |
once you've cleared the backlog, things should be smoother |
12:32 |
Dyrcona |
Yeah, I'd definitely use the retention intervals in the query to purge the output. |
12:32 |
Dyrcona |
I elided all of that because I don't know what criteria you'd want exactly. |
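A hedged sketch of building that targeted list, assuming the event_def, hook, retention_interval, and complete_time column names; the excluded hook name is purely illustrative:

    CREATE TABLE purge_events AS
        SELECT evt.id
          FROM action_trigger.event evt
          JOIN action_trigger.event_definition def ON def.id = evt.event_def
         WHERE def.hook NOT IN ('acq.example.hook')   -- illustrative: skip the acq-related hooks
           AND def.retention_interval IS NOT NULL
           AND evt.complete_time < NOW() - def.retention_interval;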
13:10 |
|
rfrasur joined #evergreen |
13:13 |
Dyrcona |
@decide filed or field |
13:13 |
pinesol |
Dyrcona: go with field |
13:23 |
|
jihpringle joined #evergreen |
13:49 |
|
JBoyer joined #evergreen |
14:12 |
jeff |
This might be a weird one: does anyone here have a favorite workflow for physical library card distribution? I'm talking about "more than 100, less than 1000 cards", physical envelopes with names on them containing literature about the library and a library card. |
14:13 |
jeff |
I'm leaning toward (roughly) "print two labels, one to go on the envelope, one to go on the library card, scan the card and the label to establish a link between 'this envelope received this card', batch create cards based on that data, spot check, distribute." |
14:14 |
jeff |
label on the library card would be something that would be removed before use. |
14:15 |
jeff |
and we might skip the label on the card but it's useful for ensuring that the cards don't get mixed up before people write their name on the card (if they choose to do that). |
14:16 |
jeff |
these will be distributed in school; multiple siblings may come home with a card on the same day... the removable label on the card itself is probably extra work, but I think it might still be worth it. |
15:12 |
|
Keith-isl joined #evergreen |
15:21 |
Keith-isl |
Does anyone speak AngularJS and spine label templates who might be able to assess why my date formatting isn't showing up correctly in a string? |
15:22 |
Keith-isl |
{{col.c ? col.c['active_date'] : ''}} |
15:22 |
Keith-isl |
gives the super long, very granular date/time format |
15:22 |
Keith-isl |
And wanting MM/dd/yyyy |
15:23 |
Keith-isl |
And I know there's gotta be a: |
15:23 |
Keith-isl |
| date'MM/dd/yyyy' |
15:23 |
Keith-isl |
In there somewhere |
15:23 |
Keith-isl |
But I don't know where to put it without breaking things. |
15:30 |
Keith-isl |
Think I got it |
15:32 |
Keith-isl |
{{col.c ? (col.c['active_date'] | date:'MM/dd/yyyy') : ''}} for anyone digging this up in an archive seven years from now. :) |
15:38 |
csharp_ |
Keith-isl++ |
15:38 |
csharp_ |
rubberducks++ |
15:38 |
Keith-isl |
Quack quack |
15:40 |
mmorgan |
\_()< |
15:41 |
csharp_ |
🦆 |
15:57 |
Dyrcona |
csharp_++ |
16:09 |
|
jvwoolf joined #evergreen |
16:20 |
|
Keith_isl joined #evergreen |
17:09 |
|
mmorgan left #evergreen |
17:26 |
|
jvwoolf left #evergreen |
18:00 |
pinesol |
News from qatests: Testing Success <http://testing.evergreen-ils.org/~live> |
20:17 |
|
rfrasur joined #evergreen |