
IRC log for #evergreen, 2021-11-02


All times shown according to the server's local time.

Time Nick Message
06:01 pinesol News from qatests: Testing Success <http://testing.evergreen-ils.org/~live>
07:15 rjackson_isl_hom joined #evergreen
07:41 Dyrcona joined #evergreen
08:28 Dyrcona joined #evergreen
08:32 jvwoolf joined #evergreen
08:37 Dyrcona So, we had storage crash while running the fine generator again this morning. The logs look really weird. It looks like the listener crashed with "server: died with error Use of freed value in iteration at /usr/lib/x86_64-linux-gnu/perl/5.26/IO/Select.pm line 71."
08:38 Dyrcona Then it cleans up and restarts, but it is not capable of starting any drone processes.
08:38 Dyrcona I had to kill it with fire in order to restart the service.
08:39 Dyrcona I guess the logs don't look that weird after all, but it seems like something doesn't work when a listener tries to recover itself.
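A rough sketch of the "kill it with fire" recovery described above, assuming the stock osrf_control script and the usual OpenSRF process titles ("OpenSRF Listener [service]" / "OpenSRF Drone [service]"); the flags and titles are worth confirming against your OpenSRF version before running anything:

    # See what the listeners and drones currently report
    osrf_control --localhost --diagnostic

    # Forcibly clear a wedged open-ils.storage listener and its drones
    # (process titles are an assumption; verify with ps before killing)
    pkill -9 -f 'OpenSRF Listener \[open-ils.storage\]'
    pkill -9 -f 'OpenSRF Drone \[open-ils.storage\]'

    # Bring the services back up
    osrf_control --localhost --restart-all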
08:44 rfrasur joined #evergreen
08:47 mmorgan joined #evergreen
09:15 collum joined #evergreen
09:43 mantis joined #evergreen
12:04 jihpringle joined #evergreen
12:18 rjackson_isl_hom joined #evergreen
14:07 jvwoolf1 joined #evergreen
15:23 csharp_ having an issue where the open-ils.actor service is dropping offline with "We lost the last node in the class, responding with error and removing..." in the osrfwarn log
15:23 csharp_ it does not appear to be related to max children - I'm taking snapshots of osrf_control --diagnostic that don't show crazy drone counts
15:24 csharp_ I'm assuming that message happens *after* whatever it was that is killing the last node, so this is a devil to troubleshoot
15:25 Bmagic we're glued to the TV
15:25 csharp_ election day? or is something else going on?
15:26 Bmagic your actor service
15:26 Bmagic @coffee csharp_
15:26 * pinesol brews and pours a cup of Colombia Huila Supremo, and sends it sliding down the bar to csharp_
15:26 csharp_ Bmagic: :-)
15:27 csharp_ it's happening often enough that I have a cron running every 3 minutes that checks for a down service and restarts opensrf if so
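A minimal sketch of that kind of watchdog, assuming the standard /openils/bin install prefix and assuming that a down service produces a line containing "ERR" in osrf_control --diagnostic output (both assumptions should be checked against your own installation):

    #!/bin/bash
    # check_opensrf.sh -- hypothetical watchdog, not csharp_'s actual script.
    # If --diagnostic reports a problem, restart all OpenSRF services.
    if /openils/bin/osrf_control --localhost --diagnostic | grep -q 'ERR'; then
        /openils/bin/osrf_control --localhost --restart-all
    fi

    # crontab entry for the opensrf user: run the check every 3 minutes
    # */3 * * * * /openils/bin/check_opensrf.sh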
15:27 jeffdavis I haven't seen that error before and it doesn't show in our logs for the past month.
15:28 csharp_ one factor may be that we just upgraded our app servers to 18.04
15:28 jeffdavis Of course we haven't had as much trouble with open-ils.actor lately either.
15:28 Bmagic wow, could it be a persistent staff member submitting a huge CSV of patron barcodes into a bucket?
15:28 csharp_ that occurred to me, but I don't know where I would look in the logs
15:29 jeffdavis Are you seeing "no children available" messages? I assume not...
15:30 csharp_ jeffdavis: not in relation to this issue, no
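On the question of where to look for a giant bucket upload: the actor bucket APIs live under the open-ils.actor.container.* namespace, so one hedged starting point is to grep the activity/OSRF logs around the failure window for a burst of those calls. Log file names and locations vary by site, so treat the path below as a placeholder:

    # Look for bucket-related actor calls near the crash time (log path is a placeholder)
    grep 'open-ils.actor.container' /var/log/evergreen/activity.log | less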
15:32 rfrasur (This is what happens when there are chores to be done.)
15:32 csharp_ rfrasur: definitely
15:34 rfrasur Also, my kids are old, and the only one that still lives with me DID sign for a thing from FedEx today, and WILL bring in the 40lbs of cat litter later (not cuz I can't, but because I really, really just don't want to).
15:35 Bmagic sounds 'bout right
15:35 csharp_ lazy 15-year-old is all we have right now - she gets away with few chores because managing her is harder than doing it myself
15:35 Bmagic yep! and they know exactly where that balance is
15:35 csharp_ lazy 18-year-old is at college and mostly out of the day-to-day picture
15:36 rfrasur Hmm, sounds like the dishes issue in our house.  It's cool though.  No weird creatures have emerged from the drain, so we're good right now.
15:36 csharp_ my wife went all librarian last night and put up signs everywhere
15:36 Bmagic lol!
15:36 Bmagic signs++
15:37 rfrasur lol!! Signs in the house?
15:38 rfrasur Bless her sweet, sweet, hopeful summer soul.
16:09 csharp_ 2021-11-02 15:17:49.659 [info] <0.14971.24>@ejabberd_c2s:process_terminated:271 (tcp|<0.14971.24>) Closing c2s session for opensrf@private.brick04-head.gapines.org/open-ils.actor_listener_brick04-head.gapines.org_5539: Connection failed: timeout
16:09 csharp_ that corresponds with one of the failures
16:11 jvwoolf1 left #evergreen
16:13 csharp_ suspecting something is off with ejabberd config
16:14 jeffdavis csharp_: what are your c2s_shaper and s2s_shaper settings in ejabberd.yml?
16:15 jeffdavis Also I think I tried to copy over our old 16.04-era ejabberd.yml when we upgraded to 18.04 and had to change to the stock one (plus the usual EG changes)
16:16 jeffdavis see bug 1941653 re shaper settings
16:16 pinesol Launchpad bug 1941653 in OpenSRF "Update recommended shaper settings" [Medium,New] https://launchpad.net/bugs/1941653
16:20 csharp_ jeffdavis: normal and fast respectively - I'll try "none"
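For reference, the change being discussed (see bug 1941653) amounts to pointing the c2s and s2s shaper rules at none in /etc/ejabberd/ejabberd.yml. A sketch in the ejabberd 18.x shaper_rules syntax shipped with Ubuntu 18.04 (the key layout differs in older ejabberd versions, so adapt as needed), followed by an ejabberd restart to pick up the change:

    # /etc/ejabberd/ejabberd.yml (excerpt)
    shaper_rules:
      c2s_shaper: none   # was "normal"
      s2s_shaper: none   # was "fast"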
16:25 jihpringle joined #evergreen
16:31 mmorgan left #evergreen
16:39 csharp_ berick: I dreamed you and I were in a band together with Dyrcona - you were on the drums and I was playing lead guitar - in the dream I was wailing through an amp stack - I'm actually not really a soloist but it was cool in the dream
17:10 jeffdavis I hope you used pinesol to pick a band name.
17:10 csharp_ @band
17:10 pinesol csharp_: Sea of Green
17:11 csharp_ nope - that's a real band from berick's and my hometown :-)
17:11 csharp_ @band
17:11 pinesol csharp_: CHACHA20
17:11 csharp_ lol
17:12 csharp_ jeffdavis: our servers appear to like the shaper change - might just be lower load overall, but things feel less frantic
17:13 jeffdavis Nice! gmcharlt_ suggested it to us and it improved latency a bit
17:13 csharp_ pinesol: decide [band] or [band]
17:13 pinesol csharp_: go with Lacking Writeup
17:14 csharp_ gmcharlt_++
18:01 pinesol News from qatests: Testing Success <http://testing.evergreen-ils.org/~live>
19:54 jihpringle joined #evergreen
