Evergreen ILS Website

IRC log for #evergreen, 2020-12-09


All times shown according to the server's local time.

Time Nick Message
00:28 laurie joined #evergreen
00:54 abowling joined #evergreen
01:36 mrisher_ joined #evergreen
06:01 pinesol News from qatests: Testing Success <http://testing.evergreen-ils.org/~live>
07:27 Dyrcona joined #evergreen
08:11 tlittle joined #evergreen
08:14 rfrasur joined #evergreen
08:26 mmorgan joined #evergreen
08:27 alynn26 joined #evergreen
08:31 mantis2 joined #evergreen
09:22 Dyrcona Patches and sed scripts and diffs! Oh my!
09:38 Dyrcona Speaking of.... I found this old sed script in my patches directory, and it adds "export PERL_HASH_SEED=0" to /etc/apache2/envvars. I wonder if that is still advisable? miker?
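A sed one-liner of the kind Dyrcona describes might look like the following. This is a hypothetical reconstruction, not the actual script from his patches directory, and it is demonstrated on a temp copy rather than the real /etc/apache2/envvars:

```shell
# Hypothetical reconstruction: append "export PERL_HASH_SEED=0" to Apache's
# envvars file if it is not already present (GNU sed's one-line `a` form).
envvars=$(mktemp)
printf 'export APACHE_RUN_USER=www-data\n' > "$envvars"

grep -qx 'export PERL_HASH_SEED=0' "$envvars" || \
    sed -i '$a export PERL_HASH_SEED=0' "$envvars"

tail -n 1 "$envvars"   # -> export PERL_HASH_SEED=0
```

The `grep -qx` guard makes the script idempotent, so running it twice does not append the line twice.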
09:43 jvwoolf joined #evergreen
09:44 JBoyer Seems like the sort of thing that will eventually just go away, was there any point to it beyond conveniently ordering things in the logs?
09:47 dbwells I believe when Perl added additional hash randomization, some of our queries with a lot of joins started scrambling in ways which put important tables outside the reach of the query planner.  That was an early attempt at making the join order consistent again, but I *think* something better came along to get the joins to follow the order the code provides.
09:50 dbwells See: https://bugs.launchpad.net/evergreen/+bug/1527731
09:50 pinesol Launchpad bug 1527731 in Evergreen "Query for OPAC copies can be extremely slow" [Medium,Fix released]
09:52 Dyrcona JBoyer++ dbwells++
09:53 Dyrcona Thanks for the memory jog. I guess I can delete that one.
09:59 dbwells I am a little confused looking at that bug what happened to the OpenSRF bits of it, as that "export" was added to the docs in one of those commits.  Maybe they came and went, or maybe they never came.  Hmm.
10:01 Dyrcona Well, we don't seem to be having that problem any more, so maybe the OpenSRF commits weren't necessary?
10:02 dbwells I guess if you read it as the second OpenSRF branch replacing the first, it makes sense that way.  The second branch just targeted autogen.sh directly.  There were a few other side effects outside the join order, apparently.  At any rate, nobody has noticed in a really long time, so we're probably good :)
10:03 JBoyer dbwells++
11:00 jihpringle joined #evergreen
11:13 Cocopuff2018 joined #evergreen
11:37 dbwells joined #evergreen
12:00 sandbergja joined #evergreen
12:39 abowling joined #evergreen
13:01 nfBurton joined #evergreen
13:39 nfBurton Looking at bug LP1482757, which is exciting since it's getting recent movement! What is the trigger I need to lift to permanently delete items in my system instead of marking them deleted? I looked at the triggers list but may be missing it?
13:39 nfBurton #LP1482757
13:40 nfBurton joined #evergreen
13:40 jeff bug 1482757
13:40 pinesol Launchpad bug 1482757 in Evergreen 3.5 "Loading records with located URIs should not delete and recreate call_numbers" [Low,Confirmed] https://launchpad.net/bugs/1482757
13:40 nfBurton Oh. Just formatting it wrong eh
13:41 nfBurton I tried -_-
13:49 * miker reads up
13:52 miker Dyrcona: there were some json_query calls that, in older versions on newer perls, would reorder the join tree and cause bad/weird queries, and that would help there. in modern versions (since we added support for ordered join trees via arrays instead of hashes/objects) it should be fine to ignore that. (there's a related issue with the autogen cache seed stability, but that doesn't have anything to do with the apache envvars, I don't think)
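The hash-vs-array distinction miker describes comes down to JSON itself: object keys carry no guaranteed order (and Perl's hash-seed randomization scrambles them between runs), while array elements are ordered. An illustrative sketch, not exact json_query syntax (though bre, acn, and acp are real Evergreen class hints):

```
# object form: join order at the mercy of hash iteration order
{"from": {"bre": {"acn": {}, "acp": {}}}}

# array form: joins applied in the order written
{"from": ["bre", {"acn": {}}, {"acp": {}}]}
```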
13:58 khuckins joined #evergreen
14:02 Dyrcona nfBurton: You want to permanently delete bibs, call numbers, both?
14:06 Dyrcona The short answer is multiple triggers need to be disabled.
14:07 nfBurton both. I assume they have similar names?
14:07 mmorgan nfBurton: The asset tables have rules (protect_cn_delete, protect_copy_delete, etc.) that set deleted flag instead of actually deleting rows.
14:08 Dyrcona mmorgan++ You also need to disable rules.
14:09 Dyrcona I wrote something years ago to remove bibs that never had copies or URIs attached for someone else. I'll see if I've shared it online somewhere and paste the link.
14:09 mmorgan But I believe you can actually delete rows with direct database queries.
14:11 rhamby nfBurton if you look in the migration tools repo there are scripts for removing whole org units, the appropriate ones for bibs/call numbers will show how to disable/re-enable the appropriate triggers and constraints
14:12 rhamby you could crib pretty heavily from those tbh
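A rough sketch of the pattern those migration scripts use. Note that PostgreSQL rules cannot be disabled, only dropped and recreated, whereas triggers can be toggled with ALTER TABLE. The rule body below is a guess at Evergreen's actual definition; copy the real one from the live schema (or the migration-tools repo) before running anything like this:

```sql
BEGIN;

-- rules must be dropped outright; there is no DISABLE RULE
DROP RULE protect_copy_delete ON asset.copy;

-- triggers can simply be toggled off for the session's work
ALTER TABLE asset.copy DISABLE TRIGGER USER;

DELETE FROM asset.copy WHERE deleted;  -- the actual purge

ALTER TABLE asset.copy ENABLE TRIGGER USER;

-- recreate the protective rule (body guessed; take it from the real schema)
CREATE RULE protect_copy_delete AS
    ON DELETE TO asset.copy
    DO INSTEAD UPDATE asset.copy SET deleted = TRUE WHERE id = OLD.id;

COMMIT;
```

Wrapping the whole thing in one transaction means the protection is never absent for other sessions if something fails midway.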
14:13 Dyrcona rhamby++
14:13 Dyrcona I was just about to suggest looking at the migration scripts repo because I haven't shared that SQL online.
14:14 Dyrcona https://github.com/EquinoxOpenLibraryInitiative/migration-tools
14:15 mmorgan nfBurton: FWIW, bug 1482757 does not change any triggers or rules, it just drastically reduces the number of times delete calls are made on asset.call_numbers
14:15 pinesol Launchpad bug 1482757 in Evergreen 3.5 "Loading records with located URIs should not delete and recreate call_numbers" [Low,Confirmed] https://launchpad.net/bugs/1482757
15:18 sandbergja joined #evergreen
15:20 jihpringle joined #evergreen
16:06 mantis2 left #evergreen
16:10 nfBurton joined #evergreen
16:10 jihpringle joined #evergreen
16:10 nfBurton rhamby++ mmorgan++ Dyrcona++
16:11 nfBurton Thanks. I'll look further. I'm trying to clean up since the bug fix isn't pushed yet. I'm ending up with obscene amounts of call numbers especially in the database now because it adds instead of updating
16:23 jeffdavis I'm seeing this in ejabberd logs a fair bit: "More than 80% of OS memory is allocated, starting OOM watchdog"
16:24 jeff is that much OS memory allocated?
16:25 Dyrcona I've seen ejabberd use lots of RAM. I've seen it killed by the OOM KIller in the past, too.
16:28 Dyrcona Just glancing at my bricks, beam.smp is using the most memory of all the processes.
16:29 jeffdavis I'm not sure what counts as "allocated" memory
16:32 Dyrcona jeffdavis: Run top. It sorts by % of CPU by default. Each time you hit <, it moves the sort column one to the left. On Ubuntu, hitting it 4 times will sort by virtual memory used.
16:33 Dyrcona Three times goes by resident (i.e. actual memory) used.
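Non-interactive equivalents of the top sorting Dyrcona describes, assuming procps ps on Linux, can be handy for scripting or pasting into chat:

```shell
# Top 5 processes by resident memory (what top shows when sorted by RES)
ps -eo pid,comm,rss,%mem --sort=-rss | head -n 6

# Same idea sorted by virtual memory (top's VIRT column)
ps -eo pid,comm,vsz --sort=-vsz | head -n 6
```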
16:33 Dyrcona beam.smp is currently using 3.7% of the memory on the head server of our first brick, for example.
16:33 mrisher joined #evergreen
16:35 jeffdavis utilities like top generally show roughly 50% of memory used, 25% buffer/cache, the remainder free. beam is sitting around 3%
16:35 Dyrcona On some of my other bricks, apache is using more of the RAM.
16:37 jeffdavis "committed" memory appears to be much higher though, exceeding available RAM in fact
16:37 Dyrcona Yeah, that's "normal." On my ARM laptop it typically shows something like 500% of memory committed.
16:38 Dyrcona I'm curious why OOM messages appear in the ejabberd log, though. Now that I think about it, they normally go to kern.log.
16:40 jeff during the warning (before erlang logs the all-clear), you might find the output of free(1) useful. "free -h" will give you human-readable units, and you'd be interested in the "-/+ buffers/cache" row.
16:40 Dyrcona Well, OK. On my ARM laptop a process called baloo_file is using 256.6G of virtual memory. With RAM + SD, the machine has a total of 68G available.
16:43 jeffdavis Hm, looking at some graphs, it looks like all non-free memory does edge over 80% sometimes.
16:43 jeffdavis I'm unsure if I should tinker with ejabberd config, also not clear if this "OOM watchdog" is killing ejabberd procs/OpenSRF drones
16:43 jeff ejabberd's 80% warning message relies on an erlang module that pays attention to /proc/meminfo if available, from what I can tell.
16:44 jeff the ejabberd watchdog won't kill things unless you've configured oom_killer in your ejabberd config, which you probably have not.
16:44 jeff it's a warning that the kernel OOM killer may kill things, and the watchdog supposedly logs extra stuff if that happens.
16:47 jeff (or perhaps "before that happens", not sure)
16:49 jeff oh. some versions of free may not give you the "-/+ buffers/cache" row. huh.
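Newer procps versions of free replaced the "-/+ buffers/cache" row with an "available" column, which comes from the kernel's MemAvailable estimate. Assuming a Linux /proc, that figure can be read directly:

```shell
# "available" memory as newer `free` reports it, straight from the kernel
awk '/^MemAvailable:/ {printf "%.1f GiB available\n", $2 / 1048576}' /proc/meminfo

# compare against free's own presentation
free -h
```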
16:50 Dyrcona This *might* be useful: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/performance_tuning_guide/s-memory-captun
16:50 jeff I'm pretty sure based on reading src/ejabberd_system_monitor.erl that ejabberd won't ever kill anything other than an erlang process, even if oom_killer is enabled in the ejabberd config.
16:51 jeff But if you are getting short on memory the kernel OOM killer might be killing things. That's logged elsewhere, usually apparent in dmesg and /var/log/kern.log on default Debian/Ubuntu logging setups.
16:51 Dyrcona I recall setting overcommit_memory to 2 in the past.
16:54 Dyrcona Also, for reference: https://www.kernel.org/doc/html/latest/admin-guide/mm/index.html
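The committed-vs-used distinction jeffdavis and Dyrcona are circling can be inspected directly from /proc/meminfo, and the overcommit knob Dyrcona mentions lives under /proc/sys/vm. Setting it requires root, so that part is shown commented out:

```shell
# Commit charge vs. the kernel's commit limit (this is what "committed" means;
# with the default heuristic policy, Committed_AS routinely exceeds RAM)
awk '/^(CommitLimit|Committed_AS):/' /proc/meminfo

# Current overcommit policy: 0 = heuristic (default), 1 = always, 2 = strict
cat /proc/sys/vm/overcommit_memory

# To switch to strict accounting, as Dyrcona recalls doing (requires root):
# sysctl -w vm.overcommit_memory=2
```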
16:57 jeffdavis I've been keeping an eye on dmesg output but no OOM events lately
16:57 jeffdavis Thank you both!
16:58 sandbergja joined #evergreen
17:10 mmorgan left #evergreen
18:01 pinesol News from qatests: Testing Success <http://testing.evergreen-ils.org/~live>
18:37 jihpringle joined #evergreen
18:53 sandbergja joined #evergreen
19:46 Cocopuff2018 joined #evergreen
20:14 JBoyer joined #evergreen
21:51 sandbergja joined #evergreen
