
IRC log for #evergreen, 2025-01-03


All times shown according to the server's local time.

Time Nick Message
07:41 kworstell-isl joined #evergreen
08:29 mmorgan joined #evergreen
08:58 dguarrac joined #evergreen
09:52 redavis joined #evergreen
10:19 Dyrcona joined #evergreen
10:47 kworstell_isl joined #evergreen
10:59 sandbergja joined #evergreen
11:16 smayo joined #evergreen
12:01 jihpringle joined #evergreen
13:15 Bmagic berick: back to my Acq order load issue: I switched my test VM back to ejabberd and it imported fine
13:16 Bmagic So that sucks
13:19 Dyrcona Bmagic: What, exactly, did you do to test the acq order load? We might want to give it a try on our dev vm.
13:19 Bmagic I'm formulating a theory that it's Docker+Redis that is introducing [something]. I couldn't find anything in the logs that helped me, but I think I'll run it again on Redis and capture the whole osrf log and post it. Before I do that, I wonder if you have any other logs I should be looking at, or perhaps settings I could change for more verbosity?
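
A minimal sketch of where OpenSRF verbosity lives, assuming a stock install under /openils; the paths, values, and flags below are typical defaults and may differ inside a Docker image.

    # Log level is set in opensrf_core.xml; 3 is info, 4 is debug.
    grep -n '<loglevel>' /openils/conf/opensrf_core.xml

    # Restart OpenSRF so a raised log level takes effect:
    osrf_control --localhost --restart-all

    # Then watch the combined OpenSRF/Evergreen activity log during the order load:
    tail -f /openils/var/log/osrfsys.log
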
13:21 Bmagic I have a MARC file with 891 records tailored for a certain library's shelving locations/circ lib/etc. It loads the order fine, but the bib records are neither created nor matched. I ran through the exact* same steps with the exact same data on the same database: success on ejabberd and failure on Redis. A little more info: loading the same file via MARC import (not acq) works fine.
13:22 Dyrcona Bmagic: Thanks. That sounds like something we could test.
13:23 Bmagic I'm not sure the MARC file matters all that much. When I cut the file down to smaller chunks, it worked on Redis. So there's another clue
13:24 Dyrcona Yeah. Number of records might matter.
13:26 Bmagic For a little while I was thinking that there was a record somewhere in the file that was tripping up the code. Splitting the file down again and again into 10-record chunks eventually started giving me successful loads. Once I got it down to 10, I figured the other half would fail, since the 20-record file containing both halves had failed, but to my surprise, both halves worked
13:26 Dyrcona Is this with Evergreen 3.14 or a different version?
13:26 Bmagic 3.14
13:27 Bmagic which is an interesting point: the ticket was sent to us when the system was still on 3.13 with Redis, and in the interim we've upgraded the system to 3.14. The version of Evergreen doesn't seem to make a difference here; I'm seeing the same issue on 3.14
13:28 Dyrcona OK.
13:28 Dyrcona I'll ask our catalogers what they've tested in acquisitions.
13:28 Dyrcona I'm pretty sure someone was looking at acq the other day.
13:28 Bmagic Dyrcona++
13:49 Dyrcona I asked, and it looks like someone is trying it out right now.
13:50 Dyrcona I'm still watching the dev system with top.
13:50 Dyrcona It's a lot more stable since I made those settings changes. Also, systemd-journald is the largest memory consumer at the moment.
13:53 Dyrcona It is using close to 2GB of swap now.
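
A quick, hedged way to confirm what is actually holding that swap; the per-process figures come straight from /proc and work the same inside or outside a container.

    # Totals first:
    free -h

    # Per-process swap use, largest first (VmSwap comes from /proc/<pid>/status):
    for f in /proc/[0-9]*/status; do
        awk '/^Name:|^VmSwap:/ {printf "%s ", $2} END {print FILENAME}' "$f"
    done | sort -k2 -nr | head
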
13:58 Dyrcona Bmagic: Our head of cataloging reports no issues loading acquisitions files with 186 and 383 records respectively.
13:58 Dyrcona That's on our dev system.
13:59 Dyrcona I am also informed that we tell our members not to load large files through acquisitions anyway. Our staff don't think that they would normally load more than 100 records at a time.
14:01 jihpringle joined #evergreen
14:18 Dyrcona @egithball Does the world need another web server?
14:18 pinesol Dyrcona: Evergreen Command Center http://apod.nasa.gov/apod/image/1204/EndeavourFlightDeck_cooper_1050.jpg
14:18 Dyrcona heh. if only I could type....
14:19 Dyrcona @eightball Does the world need another web server?
14:19 pinesol Dyrcona: Yes!
14:19 Dyrcona @eightball Is a V8 engine a good start for me?
14:19 pinesol Dyrcona: No clue.
14:29 berick Bmagic: ok, good to know.  i'll try some larger imports here and see if I can break anything
14:41 Bmagic Dyrcona: Docker is the key to making it fail I think
14:43 Bmagic And I realize Docker isn't officially supported by the Evergreen community so I think that leaves me "on my own"
15:01 kworstell_isl joined #evergreen
15:06 berick Bmagic: still though...
15:06 berick that's odd
15:07 Bmagic odd indeed. Docker has been pretty seamless/comparable, until now (it seems)
15:09 Bmagic the utility issue that I was tracking down went away once it was out of Docker. It's on my todo list to see if I can get the Rust version running on Docker+Redis so that I can provide better diagnostics
15:11 berick cool.  when you do that, just note you cannot run both the C and Rust routers at the same time.
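
A hedged pre-flight check for that switch: opensrf_router is the stock C router binary, and the osrf_control flags below assume a standard OpenSRF install.

    # Make sure the C router isn't still running and attached to the message bus:
    pgrep -af opensrf_router

    # Stop the OpenSRF-managed services (including the C router) before
    # starting the Rust router in its place:
    osrf_control --localhost --stop-all
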
15:16 berick Bmagic: for the imports, there are no errors or crashes anywhere?
15:17 Bmagic nadda
15:17 Bmagic you'd think it would have an error somewhere. That's why I was asking if I needed to look in a different log, something for Redis specifically maybe
15:19 berick there is a redis log, and journalctl
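
Hedged examples of where those logs usually live; unit and container names vary by distro and image, so the ones below are placeholders.

    # Redis's own log via systemd (Debian/Ubuntu usually name the unit redis-server):
    journalctl -u redis-server --since "1 hour ago"

    # If Redis runs in its own container, its stdout/stderr is the log:
    docker logs --since 1h <redis-container-name>

    # A quick resource snapshot from Redis itself:
    redis-cli info memory | grep -E 'used_memory_human|maxmemory_human|maxmemory_policy'
    redis-cli info clients
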
15:20 berick i have a hunch redis itself is fine.  it's pretty battle tested.  more likely some other moving part.
15:22 Bmagic the clues are: size and Docker. In both cases it was something large: large action triggers (at least in that case we got a SEGFAULT) and a large MARC file in the acq PO order load. One guess is that Redis is bumping up against some kind of artificial Docker ceiling on resources, and when it's denied, it silently breaks
15:23 Bmagic I had a problem several years ago having to do with ulimits. This feels similar
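
A sketch of how to test both theories (a Docker resource ceiling and a ulimit); container names and limit values below are examples only.

    # Per-container usage plus any hard limits that were set at creation:
    docker stats --no-stream
    docker inspect --format '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}' <container>

    # OOM-killer activity on the host would also be a smoking gun:
    dmesg | grep -iE 'oom|killed process'

    # Compare file-descriptor limits inside vs. outside the container:
    ulimit -n
    docker exec <container> sh -c 'ulimit -n'

    # Raising the limit happens at container start, e.g.
    #   docker run --ulimit nofile=65536:65536 ...
    # or via an "ulimits:" entry in docker-compose.
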
15:24 Dyrcona Bmagic: I saw an open-ils.trigger drone using 3.3GB briefly yesterday. I think it was hanging around from Tuesday.
15:24 Dyrcona I'll happily blame Perl for these issues. :)
15:25 Bmagic maybe ejabberd did a better job of normalizing the bursts?
15:26 Bmagic probably the interplay with the OpenSRF code
15:26 Dyrcona You might be on to something. It could be that ejabberd spread the work out more. I haven't tried quantifying it, but it looks like fewer drones are used with Redis.
15:27 Dyrcona I have no data for that, but it would be worth figuring out how to test it.
15:32 Dyrcona I suppose we could test it with two comparable systems and then run things like the hold targeter and fine generator with the same settings on the same data.
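
A rough sketch of that comparison; the script paths and service names vary by Evergreen version and install, so treat the ones below as examples.

    # Time the same batch jobs on each system against the same data:
    time /openils/bin/hold_targeter.pl
    time /openils/bin/fine_generator.pl

    # Meanwhile, sample how many drones the relevant service is holding open:
    watch -n 5 "ps ax | grep -c '[o]pen-ils.hold-targeter'"
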
15:37 berick i can promise higher/faster throughput for redis vs. ejabberd, even more so for larger messages
15:39 berick Bmagic: fwiw, imported 1k records via vandelay with no issue on a local non-docker vm.  not a 1-to-1 experiment, but still
15:40 Bmagic oddly enough, vandelay works, it's acq that fails
15:40 berick ah
15:40 Bmagic importing the records and* making the PO at the same time is what makes the bibs not load
15:45 berick Bmagic: PO -> Actions -> Load Bibs and Items?
15:47 berick oddly, when I choose that, I see no option to select a file to upload
15:49 Dyrcona berick: I wonder what effect that greater efficiency has on number of drones used. I could see it going either way.
16:02 Dyrcona There are 6 hold targeter drones running right now, and that's what I have it configured for.
16:06 Dyrcona Now that the hold targeter stopped, there are 5 of 30 hold targeter drones running.
16:06 Dyrcona I don't think that says very much by itself.
16:29 jeff I see this, and I think "we got both kinds: country AND western!": "We support authenticating user credentials via all of the most common protocols used by the most common ILS software used in the North American public library market."
16:55 Bmagic berick: Acquisitions -> Load MARC Order Records
16:56 berick oh, sheesh
16:56 berick thanks Bmagic
17:05 mmorgan left #evergreen
17:29 berick Bmagic: 1k recs into a new PO worked ok here.  bibs and line items.  records only, though, no copies
17:29 Bmagic that's pretty huge, just 891 for my test. But good to know. Though, I am loading copy info via vendor tag mapping
