
IRC log for #evergreen, 2023-09-18


All times shown according to the server's local time.

Time Nick Message
04:15 phasefx joined #evergreen
06:50 collum joined #evergreen
07:36 kworstell-isl joined #evergreen
08:06 BDorsey joined #evergreen
08:49 Dyrcona joined #evergreen
09:10 sandbergja joined #evergreen
09:12 mmorgan joined #evergreen
09:36 dguarrac joined #evergreen
09:47 sandbergja joined #evergreen
09:54 sandbergja Bmagic: do you ever run into an issue with the docker containers where the autogen step just freezes indefinitely on `docker run`?  It's not consistent for me, but I'm wondering if it's something in my setup
09:55 sandbergja my setup == arm64 host, running podman instead of docker just to be difficult hahaha
09:55 Bmagic hmm, not on that step
09:55 Bmagic I've not tried podman
09:56 Bmagic can you see that your CPU is running high?
09:56 Bmagic I would imagine on ARM, you might expect it to be slower
09:56 sandbergja Podman's been mainly working as a drop-in replacement.  With this one exception. :-)
09:57 Bmagic in a situation like that, I'd start a container manually and run each of the ansible steps by hand, watching the output carefully
09:57 Bmagic instead of letting it build itself
09:58 mantis1 joined #evergreen
09:58 sandbergja CPU is very relaxed
09:58 Bmagic so it could* be waiting for user input
09:58 Bmagic and that's no good of course
09:59 sandbergja When I ran it manually, it said something like "Could not connect to the configuration server, going to sleep".  But none of the previous steps had failed, and osrf_control -l --diagnostic looked like everything was chugging along
10:00 sandbergja ...and usually just deleting that container and running a new one is sufficient to fix it?
10:01 mmorgan1 joined #evergreen
10:02 Bmagic autogen ran no problem by hand?
10:03 Bmagic oh wait, what's the hostname of the container?
10:03 sandbergja Some random hash f0d0e65b9723
10:03 Bmagic if it starts with a number, you have to throw it away. OpenSRF doesn't work with hostnames that begin with a numeral
10:03 sandbergja Okay, that's probably it!
10:04 sandbergja f0d0e65b9723 is on a working one
10:04 Bmagic yeah, that's stung me before too. It didn't matter on older versions of Ubuntu because it wasn't enforced until 18.04 sometime
10:04 sandbergja So probably autogen is failing when I lose the lottery and get a hostname that starts with a number?
10:04 Bmagic yep
10:05 sandbergja Bmagic++
10:05 Bmagic that's why I've been having to pass the -h switch and set the hostname
10:05 Bmagic it's a yaml thing
10:06 sandbergja Nice!  I'll do that.
10:06 Bmagic pretty dumb, but that's how it works
10:07 sandbergja hey, that's a pretty easy fix.  Thanks, Bmagic.
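
A minimal sketch of the workaround Bmagic describes: pass the -h switch so the container gets a hostname that starts with a letter rather than a random hash that may begin with a digit. The image name below is a placeholder, not the actual Evergreen image.

    # force a letter-leading hostname so OpenSRF doesn't choke on a leading digit
    docker run -h eg-test --rm -it your-evergreen-image
    # podman accepts the same switch
    podman run -h eg-test --rm -it your-evergreen-image
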
10:07 Dyrcona Is there a Lp bug for OpenSRF not working with a hostname that begins with a number? Do we know which component of OpenSRF causes the problem?
10:07 Bmagic easy fix, but it took me probably 5+ hours to figure it out
10:08 Dyrcona OpenSRF should work with any valid hostname.
10:08 sandbergja thanks for figuring it out so the rest of us don't have to!  That's definitely a very obscure one
10:08 Bmagic Dyrcona: it's more of a yaml/xml issue than an OpenSRF issue. I'd have to hunt it down to say which piece it is
10:09 Bmagic but at the end of the day, OpenSRF can't work on a host with a hostname that starts with a number
10:16 mantis1 Is the email sent from basket actions in AT or within the code itself?  For some reason, our titles run together and repeat.
10:17 Bmagic I believe it's AT
10:20 Bmagic looking through the ATED table, I'm not so sure anymore
10:21 Bmagic "container.biblio_record_entry_bucket.csv" is as close as I can see but that one doesn't send email
10:21 mantis1 is it biblio.record_entry.email?
10:22 Bmagic I don't think so, I think that is the email button from the record page
10:22 mantis1 that's at least the generic name when I look it up on a comm server
10:28 mantis1 looking through templates-bootstrap/opac/record/email_preview.tt2
10:28 * mmorgan1 *believes* the biblio.record_entry.email trigger is for both baskets and individual bib records.
10:29 mantis1 mmorgan1: thinking the same at this point
10:29 Bmagic that might be right. I've had to track this down before but I don't remember what the conclusion was
10:30 mantis1 testing now
10:32 mantis1 yup that was it
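
For anyone retracing this later, one way to confirm which event definitions are attached to that hook is to query the action_trigger schema directly; a hedged example using the stock column names:

    -- list event definitions that fire on the basket/record email hook
    SELECT id, name, hook, reactor, active
      FROM action_trigger.event_definition
     WHERE hook = 'biblio.record_entry.email';
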
10:32 mantis1 mmorgan1++
10:32 mantis1 Bmagic++
10:32 Bmagic matis++ mmorgan1++
10:32 mmorgan mantis1++
10:32 Bmagic mantis1++ even
10:32 mmorgan Bmagic++
10:35 Bmagic mantis1: are you going to be able to make it to the hack-away this year? I don't see your name on the roster. https://wiki.evergreen-ils.org/doku.php?id=hack-a-way:hack-a-way-2023#attendees
10:35 mantis1 Not I unfortunately.  It overlaps with Jessica's maternity leave and the hiring process for my position
10:35 Bmagic mmorgan++ # fixing the 1
10:36 Bmagic mantis1: that's too bad! Maybe next year. We're going to host it one of these years
10:36 mantis1 Bmagic++
10:37 mantis1 Here's hoping!
10:37 Dyrcona mantis1: Hiring process for your position? You're not leaving us are you? :(
10:37 Bmagic I was wondering that too
10:37 mantis1 Nope!  We're having an internal succession because Amy is retiring at the beginning of the year
10:37 mantis1 Jessica is moving to member services and I'll be managing the system here
10:37 Bmagic mantis1++
10:37 Dyrcona Cool for you!
10:38 Dyrcona mantis1++
10:38 mantis1 We're hoping to get another person interested in being involved with the community!
10:38 Bmagic who wouldn't, I mean c'mon
10:38 Dyrcona I remember Amy mentioning her retirement at the conference. I didn't remember the dates, and thought she left already.
10:39 Dyrcona :)
10:39 mantis1 We had a loooong planning process lol
10:49 collum joined #evergreen
11:06 briank joined #evergreen
11:08 collum joined #evergreen
11:08 mmorgan mantis1++
11:08 mmorgan jvwoolf++
11:09 mmorgan Congrats!
11:23 Rogan joined #evergreen
11:41 Christineb joined #evergreen
11:43 collum joined #evergreen
12:02 collum joined #evergreen
12:04 kmlussier joined #evergreen
12:06 kmlussier mantis1: Is it possible your a/t def didn't get updated with the fix for bug 1057112?
12:06 pinesol Launchpad bug 1057112 in Evergreen 2.4 "biblio.record_entry.email action trigger concatenates title when multiple records are e-mailed" [Medium,Fix released] https://launchpad.net/bugs/1057112
12:07 jihpringle joined #evergreen
12:08 mantis1 kmlussier: yes!
12:08 kmlussier It's a very old one, but it sounds like the same behavior you were describing.
12:08 mantis1 It definitely was fixed after I copied the community template over
12:14 Dyrcona kmlussier++
12:14 Dyrcona mantis1++
12:23 jeffdavis I mentioned this on Friday afternoon, but I've run into an issue on 3.11 where ingest fails unless there is a constraint on metabib.browse_entry (sort_value, value), and we can't create that constraint due to bug 1695911
12:23 pinesol Launchpad bug 1695911 in Evergreen "Long browse entries cause index row size error" [Undecided,New] https://launchpad.net/bugs/1695911 - Assigned to Chris Sharp (chrissharp123)
12:25 Dyrcona jeffdavis: What version of PostgreSQL? This sounds familiar....
12:25 jeffdavis PG14
12:26 jeffdavis workarounds like indexing hashed values or using a unique index instead of a constraint don't work, the metabib.reingest_metabib_field_entries function in 3.11 seems to require the constraint specifically
12:27 jeffdavis I tried truncating the sort_value and value columns but I'm still getting an "index row size exceeds maximum" error when trying to recreate the constraint
12:30 Dyrcona Have you tried indexing on a substring of the column value? Is that what you meant by truncating the column values?
12:30 Dyrcona I recall something like this turning up recently, and I thought there was a db upgrade aimed at Pg 15 for this, but I'm not finding it.
12:34 collum joined #evergreen
12:34 mantis1 jeffdavis: same happened to me
12:35 mantis1 but I haven't looked at reingesting in a hot minute since this bug was reported
12:35 Dyrcona For some reason, I think the actual fix is to modify the ingest function to only put 1024 or 2048 characters in the columns rather than limit the indexes.
12:36 Dyrcona I could also be confusing this with something else, but it's worth a shot.
12:36 jeffdavis I tried truncating the actual values, e.g. update metabib.browse_entry set sort_value = substring(sort_value, 1, 1000) where length(sort_value) > 1000
12:37 jeffdavis but still got an error when creating the constraint afterward. Which seems weird,  I dunno if I need to update some statistics or something?
12:38 Dyrcona What's the constraint?
12:39 Dyrcona Or, which constraint has the problem?
12:40 jeffdavis https://gist.github.com/jdavis-sitka/3e8ffb5eedae646db8702aa0d2ad0145
12:43 jeffdavis it seems weird that the error message says the index row size is 2712 when the two columns only have 1000 characters each
12:46 Dyrcona Well, the problem is the size of the index not the total of the data. The index could be larger than the data, but I don't know exactly how btree indexes are computed.
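
One plausible explanation, consistent with Dyrcona's point: substring() truncates by characters, while the btree row limit (about 2704 bytes on the default 8kB page, hence the 2712 in the error) is measured in bytes, so multibyte UTF-8 values can still blow past it after a 1000-character truncation. A quick sanity check on byte versus character lengths:

    -- compare byte lengths (what the index cares about) with character lengths
    SELECT max(octet_length(value))      AS max_value_bytes,
           max(octet_length(sort_value)) AS max_sort_value_bytes,
           max(length(value))            AS max_value_chars,
           max(length(sort_value))       AS max_sort_value_chars
      FROM metabib.browse_entry;
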
12:49 Dyrcona OK. I was thinking of 1303 which limits the size of the indexes on a couple of authority.full_rec indexes.
12:52 briank joined #evergreen
12:59 Dyrcona So, I have a question: Is the unique constraint really needed?
13:00 mantis1 Dyrcona: I'm still learning all this with bibs; is that part of the fingerprinting?
13:03 Dyrcona mantis1: This is for browse searching. Fingerprinting is a part of the larger process, but I don't think this constraint is used by fingerprinting. I'm also not sure it is really needed.
13:03 jeffdavis without the constraint, metabib.reingest_metabib_field_entries fails on 3.11 with "ERROR:  there is no unique or exclusion constraint matching the ON CONFLICT specification"
13:04 Dyrcona Well, in that case it could be the 'ON CONFLICT' is the bug..... :)
13:05 Dyrcona So, I have the constraint on all of my databases without issue. I dropped it and recreated it on Pg 15. This means my data doesn't trigger the issue.
13:06 jeffdavis https://git.evergreen-ils.org/?p=Evergreen.git;a=blob;f=Open-ILS/src/sql/Pg/030.schema.metabib.sql;hb=HEAD#l1138 is the ON CONFLICT statement in question, any yes, changing that would avoid the issue
13:06 jeffdavis *and
13:07 Dyrcona I was not able to create a constraint using substring. The following gives me a syntax error: alter table metabib.browse_entry add constraint browse_entry_sort_value_value_key UNIQUE (subtstring(sort_value for 500), substring(value for 500));
13:07 Dyrcona oops wrong paste. I meant to paste the one with substring typed correctly: alter table metabib.browse_entry add constraint browse_entry_sort_value_value_key UNIQUE (substring(sort_value for 500), substring(value for 500));
13:08 Dyrcona I can create a unique index, but that probably doesn't resolve the 'on conflict": create unique index browse_entry_sort_value_value_idx on metabib.browse_entry (substring(sort_value for 1024), substring(value for 1024));
13:09 Dyrcona Incidentally, using substring(... for 500) on my data leads to non-unique entries.
13:10 collum joined #evergreen
13:10 Dyrcona jeffdavis: I think the question now is "what's the actual purpose of that unique constraint? Do we still need it?" If not, dropping the "on conflict" and switching to an index may resolve things.
13:11 Dyrcona Unfortunately, I don't know the answer to the question of why the unique constraint exists, perhaps eeevil does?
13:12 jeffdavis We need to avoid duplicate entries in metabib.browse_entry to avoid duplicate entries when browsing the catalogue, no?
13:13 Dyrcona I guess that's it, yes.
13:14 Dyrcona Try making it an index. It might serve the role of a constraint, too. I'm not sure, and I haven't checked the documentation, but if you have a database to play around in, you could find out.
13:14 jeffdavis No, an index doesn't satisfy ON CONFLICT, it has to be a constraint.
13:14 Dyrcona OK. That's what I suspected....
13:14 collum joined #evergreen
13:18 Dyrcona jeffdavis: A constraint can be made using an index.
13:21 Dyrcona Which looks like it converts the index into a constraint. I'm going to try something.....
13:23 jihpringle joined #evergreen
13:23 Dyrcona Constraints apparently cannot contain expressions. When I try to convert an index using substring to a constraint, it errors out: Cannot create a primary key or unique constraint using such an index.
13:28 Dyrcona Even says so right in the documentation, "The index cannot have expression columns nor be a partial index."
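
For reference, the syntax being discussed is ALTER TABLE ... ADD CONSTRAINT ... UNIQUE USING INDEX; it only accepts plain column indexes, which is why the substring-based index created earlier is rejected:

    -- works for an index on bare columns, but errors out for the
    -- expression (substring) index created above
    ALTER TABLE metabib.browse_entry
      ADD CONSTRAINT browse_entry_sort_value_value_key
      UNIQUE USING INDEX browse_entry_sort_value_value_idx;
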
13:30 Dyrcona I guess another option is to drop browse indexing/search. /me ducks.
13:32 jeffdavis I personally wouldn't mind but that won't happen before our scheduled 3.11 upgrade next month. :)
13:32 * eeevil reads up
13:33 Dyrcona jeffdavis: Are you upgrading from an ancient version? You should have that constraint already, but I guess if your data caused it to not get created, then you don't have ti.
13:33 Dyrcona it
13:33 jeffdavis We dropped it when we upgraded to Postgres 14 because we didn't have a solution for bug 1695911
13:33 pinesol Launchpad bug 1695911 in Evergreen "Long browse entries cause index row size error" [Undecided,New] https://launchpad.net/bugs/1695911 - Assigned to Chris Sharp (chrissharp123)
13:34 jeffdavis *dropped the constraint
13:35 Dyrcona You could try modifying the browse fields ingest function to only insert 1000 or so characters. I'd truncate metabib.browse_entry, add the constraint, modify the function, do a browse ingest, and see what happens.
13:36 Dyrcona I know you basically did this with your update, but something might have gotten missed.
13:36 Dyrcona I get non-unique values using 500, so I suggest 1000.
13:38 Dyrcona Also, maybe a vacuum analyze would help?
13:41 Dyrcona And, Pg 16 is out....
13:43 Dyrcona You could also compile PostgreSQL with a larger page size.... :/
13:46 jeffdavis I've tried truncating sort_value and value to 1000 characters, clearing out duplicate rows, and then doing vacuum analyze on metabib.browse_entry. If I try to add the constraint at that point, I get the "index row size exceeds maximum" error. If I don't add the constraint, metabib.reingest_metabib_field_entries gives the "no unique or exclusion constraint matching the ON CONFLICT specification" error.
13:46 jeffdavis Recompiling PG is not a viable option here :)
13:47 Dyrcona Yeah, I didn't think compiling Pg was really a viable option, but it is the only way I know of to change the page size.
13:48 eeevil I couldn't recall if I mentioned it on the original bug or just ("just") in here, but it looks like the latter.  one way around the issue would be to use a normalizer with a negative position (modifies the stored value, not just the index vector input) to truncate long fields. dbs' attempt -- and failure -- was because he tried to use it on preexisting data.  if we allow a flush of browse data and a browse reingest, it would work, modulo the chance
13:48 eeevil that some really long values won't be separately browse-able.
13:49 eeevil "it" in the first sentence being the remaining sentences :)
13:50 Dyrcona So, truncate the table, modify the browse ingest, add the constraint, run the browse ingest.... ?
13:52 eeevil that wouldn't have to be a configured normalizer, necessarily. it could just be a substr() directly in the ingest function when dealing with browse data. I recommend against pushing that down into the table definition, if we can avoid it, because that can make actually using the indexes harder -- we might have to match the functions in the index def to use them.
13:53 eeevil Dyrcona: as a practical matter of getting a fix in place, yes. more or less, having taken authority data into account
13:54 Dyrcona eeevil: I guess we order by sort_value somewhere, so using SHA* hashes is a bad idea?
13:54 eeevil Dyrcona: we do, in the browse logic.
13:57 Dyrcona jeffdavis: Do you want to take a stab at modifying the browse ingest, or would you like me to paste/share something?
13:58 jeffdavis If you've got something handy I'd be glad to have a look, otherwise I can poke at it.
14:01 eeevil hrm... I wonder if a GENERATED ALWAYS () STORED column could get us a digest /and/ be used in a unique constraint?
14:01 Dyrcona I don't have anything handy.
14:01 eeevil meeting time, but ... https://www.postgresql.org/docs/14/ddl-generated-columns.html
14:01 Dyrcona eeevil: I was looking at that, but didn't get far enough into it to tell.
14:10 jeffdavis There's a metabib.browse_normalize function but it's only applied to value, not sort_value, so I think it's gonna have to be substring() directly in the browse portion of metabib.reingest_metabib_field_entries
14:15 jihpringle joined #evergreen
14:18 Dyrcona jeffdavis: Yes. I was thinking of going with substring(value for 1000) on both fields where "value" is whatever is appropriate.
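
A rough, hypothetical sketch of the shape of that change, not the actual 3.11 function body: in metabib.reingest_metabib_field_entries the values are built up in plpgsql variables and the real conflict handling may differ, but the idea is to truncate both columns at insert time so the constraint's index entries stay small.

    -- hypothetical stand-in for the browse-entry insert inside the ingest function
    INSERT INTO metabib.browse_entry (value, sort_value)
    SELECT substring(v.value FROM 1 FOR 1000),
           substring(v.sort_value FROM 1 FOR 1000)
      FROM (VALUES ('example browse value', 'example sort value')) AS v(value, sort_value)
    ON CONFLICT (sort_value, value) DO NOTHING;
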
14:19 Dyrcona jeffdavis: I'm going to try using a generation expression on the constraint. Looks like that might work.
14:21 Dyrcona Oh never mind. That only works for column constraints.
14:22 Dyrcona I might try it anyway. Worst thing that will happen is I'll get the syntax error that I expect.
14:26 Dyrcona OK. It also turns out that UNIQUE constraints cannot be GENERATED. So that's two strikes against using a GENERATED constraint.
14:33 * Dyrcona checks to see what Pg version introduced generated columns/constraints.
14:35 Dyrcona Looks like Pg 10 has generated columns.
14:51 Stompro Is anyone using the "new" EDI order generator with Baker & Taylor?  Did you need to adjust the Baker & Taylor default attribute set at all?
14:53 Stompro Does the vendor (B&T) know what attributes they want if I ask them?
14:55 jihpringle Stompro: maybe email the acq list?  based on the last time we talked about EDI Attributes in the acq interest group very few libraries/consortia have made the switch
14:56 Dyrcona We've switched but I don't know the details about the attributes. That would be someone else here who doesn't hang out in IRC.
14:57 Stompro Thanks, I think I have the Ruby translator installed on Debian 12, but I don't know how to test it out.  So I was looking at the new generator also.
14:58 Dyrcona We haven't used the Ruby code in a couple of years.
14:59 mantis1 We had a problem with the loader before
14:59 mantis1 turned out it was because we didn't upgrade Ruby
15:01 Stompro I'm not sure if it works with Ruby 3.1.2, which is what Debian 12 is using.
15:02 Stompro The test_client.pl seems to work.
15:03 Stompro I guess if I don't run the order pusher and just run the A/T, I can see if the EDI message gets generated.
15:06 jihpringle joined #evergreen
15:10 Dyrcona We've had fun with that and new versions of Ruby in the past. I'll be glad to see it go.
16:11 Dyrcona I'm going to start another test run of action triggers on a vm with 32GB of RAM and 16 cores. (It has double the number of cores over production.) Should I double the parallel values for action trigger from 3 to 6 for collectors and reactors?
16:15 Dyrcona All right, I will double the values. We'll see how this goes tomorrow.
16:55 pinesol News from commits: Docs: DIG Reports Docs Update Project <https://git.evergreen-ils.org/?p=Evergreen.git;a=commitdiff;h=95c08fe464ce0b0686893ba19a87085758cfd5bf>
17:02 mantis1 left #evergreen
17:03 mmorgan left #evergreen
19:43 Rogan joined #evergreen
20:25 kworstell-isl joined #evergreen
21:35 kworstell-isl joined #evergreen
