IRC log for #evergreen, 2025-01-15

All times shown according to the server's local time.

Time Nick Message
00:15 Stompro joined #evergreen
02:07 Bmagic joined #evergreen
02:08 Jaysal joined #evergreen
02:08 dluch joined #evergreen
02:08 scottangel joined #evergreen
07:19 collum joined #evergreen
08:30 dguarrac joined #evergreen
08:33 mmorgan joined #evergreen
08:37 mantis1 joined #evergreen
08:58 Dyrcona joined #evergreen
09:50 Dyrcona I lowered the max requests for open-ils.trigger from 1000 down to 100 to try and save memory. It seems to be doing that.
09:51 Dyrcona However, since I made that change, we've had 3 instances of "stuck" action triggers (all 28-day mark-item-lost) that needed to be reset.
09:52 Dyrcona I have the emails going back to January of 2024, and it looks like we averaged 2 such emails a month, with a low of 0 in July and a high of 6 in April. I can't be sure if this current uptick is really meaningful.
09:55 Dyrcona I should maybe add for clarity that this isn't the number of action triggers that needed to be reset, just the number of times we were notified about it. They're also not all the same event.
09:55 Dyrcona So, I'm just throwing that out there for no real reason.
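For reference, the knob being described lives in opensrf.xml. A hypothetical excerpt, assuming the stock layout of the service's <unix_config> block; verify element names and defaults against your install:

    <open-ils.trigger>
      <unix_config>
        <!-- lowered from the stock 1000; each drone is recycled after
             this many requests, returning its memory to the OS sooner -->
        <max_requests>100</max_requests>
      </unix_config>
    </open-ils.trigger>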
10:43 Bmagic interesting
10:43 Bmagic Still using Redis?
10:45 Bmagic Along the Redis lines: I think I found a new reliable test: exporting bibs via CSV. On Redis, in a container, it dies pretty reliably with CSVs of more than 2k bibs. Which is nice, because it was looking like the only way to test the Redis issue was with high volumes of AT events, which means I'd need production data.
10:46 Dyrcona No, not using Redis. That's on the production utility vm with Ejabberd.
10:47 berick Bmagic: are things crashing or just not working?
10:48 Bmagic berick: I would like to answer that more thoroughly, but right now I don't have anything captured
10:48 Bmagic "not working" is what I have right now
10:48 berick k, it's a good start.  i'll see if I can reproduce
10:49 Dyrcona Bmagic: I'm not currently running cron jobs on the dev vm. It is also using Ejabberd now. FWIW, switching to Ejabberd did not resolve the issues with acquisitions loading.
10:49 csharp_ @band add Trash Folder
10:49 pinesol csharp_: Band 'Trash Folder' added to list
10:49 Bmagic Dyrcona: interesting
10:50 Bmagic Dyrcona: I don't expect any issues with Redis outside of a container. berick: same for you, if you're testing the bib export idea on a VM, I don't expect an issue.
10:50 Dyrcona Also, FWIW, acquisitions does NOT have that issue on another machine running Redis with the same code and almost the same data. The big difference: the database is on a different machine. On the vm with the loading issues, the db runs on the same "hardware."
10:50 csharp_ berick: we're testing 3.14 + RediSRF and it's going well so far, but we did have an OOM kill for OpenSRF when someone was working with buckets apparently
10:50 berick Trash Folder's first song: I Gotta Bad Feeling About This
10:50 csharp_ I don't have much data about that yet, though
10:50 csharp_ berick++
10:51 Dyrcona csharp_: Adjusting some O/S settings and fiddling with max_requests can help.
10:51 berick Bmagic: ah, ok.  I may have to dust off the ol' docker
10:51 csharp_ Dyrcona: I'll take that under advisement
10:52 Dyrcona csharp_: Give me a sec and I'll paste them again. You can search the logs and maybe find them in the meantime.
10:52 berick csharp_: i crave redis-related crash reports
10:52 csharp_ we definitely crank up requests really high because of the parallel request stuff
10:52 csharp_ berick: I'll see what the logs tell me when I get a mo
10:53 Bmagic berick: if you're interested in a "quick" redis docker build, give the "dev" container a whirl: https://hub.docker.com/r/mobiusoffice/evergreen-ils
10:53 Dyrcona berick | Bmagic: I think a docker container makes some of these things happen more frequently, but I have seen memory-related stuff on the vm.
10:54 Bmagic berick++
10:54 berick Bmagic: was gonna ask.  i'll do that
10:56 Dyrcona csharp_: Making sure that the sysctl `vm.overcommit_memory=0` and setting `vm.vfs_cache_pressure=150` helped. The first means "overcommit memory when the algorithm thinks it's OK" and the latter means "clear buffer/cache more frequently than the default." The latter can go to 200 if 150 is not enough.
10:57 Dyrcona I think 0 is supposed to be the default for vm.overcommit_memory, but it was 2 on our dev vm, which means "never overcommit memory." A setting of 1 means always overcommit memory.
11:06 csharp_ Dyrcona: thanks - looks like it's at 0 at the moment and the other is currently 100
11:08 Dyrcona csharp_: 100 is the default.
11:11 Dyrcona The vfs_cache_pressure controls the tendency of the kernel to reclaim RAM used to cache inode and dentry entries. In a system with a lot of open files, increasing that number helps. It seems to help with PostgreSQL and Evergreen on the same machine.
11:12 Dyrcona Also, just having a swap file in use helps. I'd recommend 1/4 the amount of RAM for most cases.
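The settings above, collected as a sketch of an /etc/sysctl.d drop-in. The file name is hypothetical and the values are the ones mentioned in the conversation, not tested recommendations:

    # /etc/sysctl.d/99-evergreen.conf (hypothetical name)
    # 0 = heuristic overcommit, the usual kernel default
    vm.overcommit_memory = 0
    # default is 100; higher values reclaim inode/dentry cache sooner
    # (try 200 if 150 is not enough)
    vm.vfs_cache_pressure = 150
    # apply without a reboot: sysctl --system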
11:13 Dyrcona left #evergreen
11:13 Dyrcona joined #evergreen
11:14 Dyrcona oof. typed Ctrl+w in the wrong window.
11:29 Christineb joined #evergreen
11:34 Bmagic that's the "fun" way to close the window (a lesson I learned the hard way, lol)
12:03 jihpringle joined #evergreen
12:22 collum joined #evergreen
12:42 * Dyrcona contemplates "grouping" autorenew by usr.home_ou and supposes some combination of granularity and multiple, custom filter files will be needed.
12:53 Dyrcona Hmm. I guess if I really wanted to "average" things out by day, I should left join and do a coalesce to 0.
12:55 Dyrcona hm. No. It's not that simple. I'd have to completely reorganize the query. Finding things that aren't there is difficult.
12:59 Dyrcona I'm limited by the technology of my time.
13:04 * Dyrcona tries a couple of things.
13:22 collum joined #evergreen
13:22 jihpringle joined #evergreen
13:22 Christineb joined #evergreen
13:22 Dyrcona joined #evergreen
13:22 mantis1 joined #evergreen
13:22 mmorgan joined #evergreen
13:22 dguarrac joined #evergreen
13:22 scottangel joined #evergreen
13:22 dluch joined #evergreen
13:22 Jaysal joined #evergreen
13:22 Bmagic joined #evergreen
13:22 Stompro joined #evergreen
13:22 abneiman joined #evergreen
13:22 jonadab joined #evergreen
13:22 sleary joined #evergreen
13:22 book` joined #evergreen
13:22 briank joined #evergreen
13:22 abowling joined #evergreen
13:22 phasefx joined #evergreen
13:22 akilsdonk joined #evergreen
13:22 jweston joined #evergreen
13:22 JBoyer joined #evergreen
13:22 csharp_ joined #evergreen
13:22 degraafk joined #evergreen
13:22 pinesol joined #evergreen
13:22 jeff joined #evergreen
13:22 eeevil joined #evergreen
13:22 jmurray-isl joined #evergreen
13:22 eby joined #evergreen
13:22 jeffdavis joined #evergreen
13:22 bshum joined #evergreen
13:22 Rogan joined #evergreen
13:22 denials joined #evergreen
13:22 ejk joined #evergreen
13:22 gmcharlt joined #evergreen
13:25 Dyrcona Someone over in #postgresql suggested I try generate_series() to generate the dates for the "missing" data that I want to count as zero.
13:33 Dyrcona Hm... I'm still not sure that I can get what I want this way....
13:39 csharp_ Dyrcona: super cool though
13:39 Dyrcona yeah... It still can't count rows without a target -> circ.home_ou -> org_unit.
13:40 Dyrcona I'm going to let it run and see what I get. I suspect a bunch of <null>, <date>, 0 rows.
13:40 csharp_ select generate_series('2020-10-31'::timestamp, '2021-10-31'::timestamp, '1 day');
13:41 berick huh, 1.79G for one of the docker layers?
13:41 Dyrcona csharp_: I get almost the same results as my previous query, so it's not filling in missing rows.
13:42 Dyrcona berick: Yeah, I've seen that and then Google's cloud agent or whatever can use a lot, too.
13:42 csharp_ OS?
13:44 Dyrcona csharp_: I was doing this FROM generate_series(now() - '6 months'::interval, now(), '1 day'::interval) as g(d)
13:44 csharp_ Dyrcona: interesting
13:45 csharp_ I like discovering features to play with like that
13:45 csharp_ @band add Table Bloat
13:45 pinesol csharp_: Band 'Table Bloat' added to list
13:45 Dyrcona You can use set-returning functions like tables. We do it in Evergreen quite a lot with actor.org_unit_descendants and the like.
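For example, a minimal query using that stock set-returning function in FROM:

    -- the org unit with ID 1 plus everything below it, used like a table
    SELECT oud.id, oud.shortname
    FROM actor.org_unit_descendants(1) AS oud;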
13:48 Dyrcona Hm... I wonder if I use a CTE of a distinct set of acirc.home_ou from the autorenew events.... ("Make it slow.")
13:50 * csharp_ heard "Make it slow" in Barry White's voice for some reason
13:51 Dyrcona That's as good as any voice. The StarCraft voice actor sounds like Barry White with a Russian accent. :)
13:51 csharp_ :-)
13:51 Dyrcona I think it's a play on ST:TNG.
13:52 csharp_ as in "Make it so!"?
13:52 Dyrcona Yeah. There are a lot of punny references to pop culture in Blizzard games.
13:53 Dyrcona The Battlecruiser commander will sometimes say, "Make it slow," when you give it an order to move.
13:53 csharp_ ahh
13:54 Dyrcona I think I'm chasing a fool's errand with this query... but I'll give this another shot with a CTE.
13:56 Dyrcona Right. I do want the cartesian product of my CTE "homes" and the generate_series(), so I should be able to join with a condition.
13:56 Dyrcona err, withOUT a condition.
13:59 Dyrcona nope. I keep getting syntax errors
14:01 Dyrcona I wonder... If I make a CTE of the counted data from action_trigger.event, then a CTE or temp table or something with the ous, and join that with the generate_series....
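A minimal sketch of that shape, assuming a hypothetical per-day tally table autorenew_counts(home_ou, day, n) standing in for the real CTE over action_trigger.event:

    WITH homes AS (
        -- the distinct org units that appear in the data
        SELECT DISTINCT home_ou FROM autorenew_counts
    ), days AS (
        SELECT d::date AS day
        FROM generate_series(now() - '6 months'::interval, now(),
                             '1 day'::interval) AS g(d)
    )
    -- the cross join yields one row per (org unit, day); the left join
    -- plus coalesce fills days with no events as 0
    SELECT h.home_ou, dy.day, COALESCE(c.n, 0) AS events
    FROM homes h
    CROSS JOIN days dy
    LEFT JOIN autorenew_counts c
        ON c.home_ou = h.home_ou AND c.day = dy.day
    ORDER BY 1, 2;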
14:04 Dyrcona Maybe I'll just drop it. The original query is probably accurate enough, if misleading. I have a location that has 2 events in the last 6 months, so its average of 1 per day makes it look like they've had more.
14:04 Dyrcona It's just that when they do have an event, they only have 1.
14:05 Dyrcona And, if I did average it out, it would round to zero.
14:07 Dyrcona And, for this location, that's probably OK, too, since they are no longer a member and the one patron affected by these events needs their account updated to their new home library. :)
14:11 Dyrcona I am concerned that for my smaller locations that may not have an autorenewal every day, my current tally inflates their numbers.
14:12 Dyrcona @decide accuracy or s'good enough
14:12 pinesol Dyrcona: go with s'good enough
14:23 frank_g joined #evergreen
14:28 frank_g Hi all, I am trying to add a new field to one of our reports in EG, but it is a field of the MARC record. Is it possible? The report is about copies (Barcode, Fingerprint, Copy, Status, Circulation), and I want to add the value of the 541 tag of the MARC record.
14:34 Dyrcona frank_g: You can try joining on metabib.real_full_rec where tag = '541'
14:35 Dyrcona Are you doing this in the reporter?
14:38 frank_g I am doing it using the reporter interface, but I don't know which Core Source and Source Path to select
14:40 Dyrcona Huh. It's not in the IDL, so you can't use it in the reporter.
14:40 * Dyrcona did not know that.
14:42 Dyrcona OK. You can use metabib.full_rec. It should show up in the reporter as "Flattened MARC Fields". You'll need to join through the call number to the bib to that table.
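In raw SQL the join path looks roughly like this (table and column names from the stock Evergreen schema; the reporter builds an equivalent query through the IDL):

    SELECT acp.barcode, mfr.value AS tag_541
    FROM asset.copy acp
    JOIN asset.call_number acn ON acn.id = acp.call_number
    JOIN metabib.full_rec mfr ON mfr.record = acn.record
    WHERE mfr.tag = '541';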
14:52 Dyrcona mantis1++
14:53 jihpringle60 joined #evergreen
15:01 Dyrcona I wonder if I can guesstimate how many autorenew events we process per hour?
15:08 mantis1 Dyrcona: do you do those hourly?  We just do it once a day.
15:09 Dyrcona We do them once per day, but we have problems with them and they seem to cause problems for other action triggers, so I am looking into splitting them up and running them throughout the day.
15:10 Dyrcona We average 13,717 autorenewals per day.
15:10 Dyrcona At least over the last 6 months we did.
15:11 mantis1 ah ok
15:11 mantis1 I know our preoverdues are split throughout the day
15:12 Dyrcona We combined preoverdues and autorenewals.
15:12 Dyrcona It's the autorenew process, but the message template has language that serves both purposes, so we don't run a separate pre-overdue.
15:14 Dyrcona I'm curious now what the max number of autorenews we've hand in the last 6 months is.
15:14 Dyrcona s/hand/had/
15:17 Dyrcona September 1, 2024: 38,644. Honestly, I thought it would have been higher.
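A sketch of the kind of tally behind those figures, assuming the events use the 'autorenewal' hook (check the hook name against your event definitions):

    SELECT evt.run_time::date AS day, count(*) AS n
    FROM action_trigger.event evt
    JOIN action_trigger.event_definition def ON def.id = evt.event_def
    WHERE def.hook = 'autorenewal'
      AND evt.run_time > now() - '6 months'::interval
    GROUP BY 1
    ORDER BY n DESC
    LIMIT 1;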
15:27 mantis1 left #evergreen
16:02 berick bleh, forgot docker within lxd does not play well.  starting over w/ qemu/kvm
16:05 Bmagic Docker in docker?
16:06 Dyrcona You can run vms in vms, but I wouldn't recommend it....
16:07 Dyrcona I have an Ubuntu Desktop image that I run in lxd when I want to test the Evergreen client with a different timezone or whatever.
16:14 csharp_ make sure to upgrade rsync everybody! https://learn.cisecurity.org/webmail/799323/2363710855/710d456a01842242223c665efab6fe7e542b968b56c3fa24c95977779ba85770
16:14 csharp_ Ubuntu version: https://ubuntu.com/security/CVE-2024-12084
16:14 pinesol The LearnDash LMS plugin for WordPress is vulnerable to Sensitive Information Exposure in all versions up to, and including, 4.10.2 via API. This makes it possible for unauthenticated attackers to obtain access to quiz questions. (http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-1208)
16:14 csharp_ uhhh
16:15 csharp_ pinesol: bad
16:15 pinesol csharp_: Must be because I had the flu for Christmas.
16:15 Bmagic csharp_++
16:15 berick teehee, access to quiz questions
16:15 csharp_ pinesol: you have to read the whole CVE number, dummy
16:15 pinesol csharp_: https://www.monsterfro.com/images/seniorpic2.gif
16:15 csharp_ Bmagic++
16:16 csharp_ berick: I wonder which lame tattletale reported that bug?
16:16 csharp_ @blame snitches
16:16 pinesol csharp_: snitches WILL PERISH UNDER MAXIMUM DELETION! DELETE. DELETE. DELETE!
16:17 csharp_ @blame add GET A LIFE, $who!
16:17 pinesol csharp_: The operation succeeded.  Blame #35 added.
17:13 mmorgan left #evergreen
17:42 jihpringle joined #evergreen
18:46 jihpringle89 joined #evergreen
