Time | Nick | Message
00:34 |
|
bshum joined #evergreen |
04:31 |
pinesol |
News from qatests: Testing Success <http://testing.evergreen-ils.org/~live> |
07:02 |
|
agoben joined #evergreen |
08:03 |
miker |
mmorgan: re deleting users in batch from a bucket, that behavior is intended and documented (in the specs, at least). it's to allow rolling back a batch action, and to provide a level of accident protection. |
08:04 |
miker |
also Bmagic -^ |
08:04 |
miker |
so, not a bug |
08:04 |
|
Dyrcona joined #evergreen |
08:07 |
|
_bott_ joined #evergreen |
08:11 |
|
collum joined #evergreen |
08:39 |
|
mmorgan joined #evergreen |
08:59 |
csharp |
@decide bug or feature |
08:59 |
pinesol |
csharp: go with feature |
09:00 |
|
stephengwills joined #evergreen |
09:00 |
csharp |
miker: did you notice my findings over the weekend that reporter.old_super_simple_record is not (to my eyes) RDA-aware? |
09:00 |
csharp |
wondering if we could populate rmsr from reporter.simple_record instead? |
09:00 |
* csharp |
is sure there are reasons why stuff is the way it is :-) |
09:01 |
csharp |
in any case, the end result is that bucket and other displays are not showing publisher/pubdate data from 264 fields |
09:05 |
JBoyer |
There's no reason that the reasons things are the way they are can't be made into reasons they were the way they used to be, especially with display fields finally showing their faces. Lots of plumbing to do though. |
09:06 |
JBoyer |
There have been a lot of complaints about missing publishers in buckets here too though I haven't had time to look into it. |
09:06 |
agoben |
Z39.50 too |
09:07 |
|
aabbee joined #evergreen |
09:09 |
csharp |
good to know |
09:10 |
* Dyrcona |
finds it ironic that he's listening to Missing Persons' song "Words" as he read JBoyer's word salad. :) |
09:10 |
csharp |
ha! |
09:11 |
JBoyer |
Dyrcona++ |
09:11 |
Dyrcona |
The way things are isn't necessarily the way things ought to be. :) |
09:11 |
JBoyer |
I try to make a well rounded word salad. |
09:12 |
Dyrcona |
JBoyer++ |
09:12 |
Dyrcona |
I understood it. I just had to slow down... |
09:12 |
csharp |
this came to mind: https://www.reddit.com/r/quotes/comments/2gb7mp/dont_ever_for_any_reason_do_anything_to_anyone/ but yeah, I understood it too :-) |
09:12 |
Dyrcona |
And, now "Destination Unknown" starts... "Life is so strange..." |
09:14 |
Dyrcona |
heh: https://www.reddit.com/r/quotes/comments/2gb7mp/dont_ever_for_any_reason_do_anything_to_anyone/ckiqoxx |
09:14 |
Dyrcona |
Guess I chose decent theme music for this morning. :)
09:16 |
Dyrcona |
On an unrelated note, I modified the postgres configuration on our training server recently, but I think I was too conservative with the shared memory. I used 32GB because I thought the server had only 128GB of RAM. Turns out it has 503GB, according to free. |
09:17 |
Dyrcona |
I only mention it because a metabib.reingest_record_attributes for item_lang has been running since last Friday night. I thought it would finish over the weekend. |
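[Editor's note: the shared-memory change Dyrcona describes maps to settings in postgresql.conf. A hedged sketch using the figures from this conversation; these are not a recommendation for other hardware.]

```
# postgresql.conf fragment -- values taken from this conversation;
# tune for your own hardware.
shared_buffers = 32GB   # the "too conservative" figure on a ~503 GB box
work_mem = 512MB        # per sort/hash operation, not per query (see below)
```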
09:19 |
JBoyer |
Would shared mem have a larger effect on that than work_mem? |
09:20 |
Dyrcona |
JBoyer: Not sure, and I meant to say shared_buffers. |
09:21 |
Dyrcona |
work_mem is only 512MB. It is hard to find decent guidelines for work_mem. There's a lot of bad folklore out there. |
09:21 |
Dyrcona |
Most of the advice I've received is from sources that don't understand how work_mem is used. |
09:23 |
JBoyer |
Many (most?) of them have bad folklore surrounding them. :/ I do know you have to be careful playing with work_mem since it can be multiplied by the number of open connections, but other than that all I've seen is "experiment with your dataset" which is vague but at least accurate. |
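[Editor's note: JBoyer's caution above, that work_mem applies per sort/hash operation and multiplies across connections, can be sketched as simple arithmetic. The connection and operations-per-query counts here are illustrative assumptions, not figures from the conversation.]

```python
# Rough worst-case memory ceiling for PostgreSQL work_mem: each
# sort/hash operation in a query may use up to work_mem, so the
# bound is connections x operations x work_mem, not just
# connections x work_mem.

def worst_case_work_mem_mb(work_mem_mb: int, max_connections: int,
                           ops_per_query: int) -> int:
    """Upper bound (in MB) on memory consumed by sort/hash operations."""
    return work_mem_mb * max_connections * ops_per_query

# 512 MB work_mem from the discussion; 100 connections and 3
# operations per query are assumed for illustration.
print(worst_case_work_mem_mb(512, 100, 3))  # -> 153600 MB, i.e. 150 GB
```

As Dyrcona notes below, real usage is usually far less than this ceiling, but it explains why work_mem advice is so hedged.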
09:23 |
Dyrcona |
Current song: Mental Hopscotch, seems appropriate. :)
09:24 |
JBoyer |
I don't know that it would make a huge difference for a single reingest call multiplied by some small number though. |
09:25 |
Dyrcona |
Yeah, I don't think so, either. |
09:25 |
Dyrcona |
One thing most people overlook is this line from the documentation: Note that for a complex query, several sort or hash operations might be running in parallel; each operation will be allowed to use as much memory as this value specifies before it starts to write data into temporary files. |
09:26 |
Dyrcona |
That means, you can end up with greater than number of connections times work_mem being used. |
09:26 |
Dyrcona |
But, in my experience, it is usually less. |
09:28 |
JBoyer |
Oh, right. (possibly even worse with the parallel stuff in 9.6+) But you almost have to come up with a specially crafted query designed to use resources vs returning specific data. |
09:28 |
|
jvwoolf joined #evergreen |
09:33 |
|
mmorgan1 joined #evergreen |
09:57 |
Bmagic |
miker: That's the conclusion I was coming to. The release notes helped
09:58 |
Bmagic |
However, I thought I saw somewhere... email list or IRC chat where someone wanted the mass delete user bucket function to also delete each account the same as it would one at a time (PURGED-XXXXXXX) |
10:19 |
Dyrcona |
Here's a question: Should deleted bib records show up in reporter.materialize_simple_record? |
10:20 |
|
Christineb joined #evergreen |
10:21 |
Dyrcona |
Looks like we have 1.08 million deleted bibs showing up in rmsr. |
10:22 |
JBoyer |
I think since you can eventually reach them through reports that it's a good idea. |
10:22 |
Dyrcona |
Ok. Looks like the answer is "yes" based on our training server. |
10:22 |
Dyrcona |
JBoyer++ |
10:22 |
Dyrcona |
This means I don't have to change the code I just wrote. :) |
10:24 |
JBoyer |
+1 to that |
10:25 |
csharp |
yeah, we do want deleted bibs - lots of catalogers want data for deleted recs |
10:30 |
Dyrcona |
Thanks. I'm using the single argument version of reporter.simple_record_update which updates rmsr for deleted records. |
10:31 |
|
yboston joined #evergreen |
10:31 |
Dyrcona |
I considered joining with biblio.record_entry and using the two argument version with the deleted flag as the second argument, but then thought I should ask before changing anything. :) |
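[Editor's note: a sketch of the two approaches Dyrcona contrasts, assuming a stock Evergreen schema; verify the reporter.simple_record_update signatures against your version before running anything like this.]

```sql
-- Single-argument form, as used above: refreshes rmsr rows and keeps
-- deleted bibs present in the table.
SELECT reporter.simple_record_update(bre.id)
  FROM biblio.record_entry bre
 WHERE bre.id > 0;  -- skip the -1 placeholder record

-- Two-argument form Dyrcona considered: passing the deleted flag as
-- the second argument would drop deleted bibs from rmsr instead.
SELECT reporter.simple_record_update(bre.id, bre.deleted)
  FROM biblio.record_entry bre
 WHERE bre.id > 0;
```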
10:33 |
miker |
csharp: I didn't see that. some work on display fields may be necessary (looks like JBoyer said the same?) |
10:36 |
miker |
Bmagic: I don't recall seeing that, but I don't doubt someone said they wanted that. I think there's a "purge old deleted users" script somewhere? anyway, I'm conservative when it comes to deletes that preclude fixing mistakes, esp. when it's a batch action on a list you can't reasonably review quickly (think: bucket with 1000s of users, collected by many staff)
10:39 |
csharp |
miker: understood - I've put that on my to-do |
10:40 |
csharp |
for the channel - looking at bug 1812900 and wondering whether http://git.evergreen-ils.org/?p=Evergreen.git;a=blob;f=Open-ILS/web/js/ui/default/staff/cat/volcopy/app.js;h=20c825746ebf0138cb32639fc88304023b69295f;hb=HEAD#l1040 has been/is being replaced by DB-side workstation settings? |
10:40 |
pinesol |
Launchpad bug 1812900 in Evergreen "Print item labels on save & exit not sticky" [Undecided,New] https://launchpad.net/bugs/1812900 |
10:41 |
csharp |
also wondering what hatch.setItem/getItem are doing in a post-Hatch-for-storage world |
10:44 |
csharp |
ah - it looks like that call isn't really using Hatch? |
10:44 |
berick |
csharp: it talks to the server |
10:44 |
berick |
so, you can keep using hatch.set/getItem |
10:45 |
berick |
in Angular, we have StorageService and ServerStorageService |
10:45 |
csharp |
I see |
10:47 |
csharp |
so it looks like the "non-sticky" setting should be present in the cat.copy.defaults workstation setting |
10:55 |
|
nfBurton joined #evergreen |
10:56 |
csharp |
I've never used the Holdings Defaults screen - is the intention that when I click on a checkbox it's saved on the server in that instant? I don't see a "Save" button anywhere |
10:57 |
csharp |
ah - yes, I see that it did save |
10:58 |
csharp |
hmm - I'm wondering if this might be cache vs. Hatch vs. 3.0 data/storage vs. 3.2 data/storage |
11:02 |
|
gsams joined #evergreen |
11:05 |
berick |
3.2 storage only comes into play on the ACQ admin screens for now |
11:16 |
|
jvwoolf joined #evergreen |
11:21 |
csharp |
berick: I guess what I mean is server-side storage |
11:34 |
* berick |
nods |
11:47 |
|
sandbergja joined #evergreen |
11:57 |
|
nfburton joined #evergreen |
12:43 |
|
dpearl joined #evergreen |
13:06 |
Dyrcona |
csharp | JBoyer: Lp 1811696 |
13:06 |
pinesol |
Launchpad bug 1811696 in Evergreen "Add option to rebuild reporter.materialized_simple_record to pingest.pl" [Wishlist,New] https://launchpad.net/bugs/1811696 |
13:07 |
Dyrcona |
Pointing it out because I added branch(es) today. |
13:07 |
csharp |
Dyrcona++ |
13:07 |
Dyrcona |
I should have waited before making comment #1, but oh well. |
13:09 |
csharp |
reporter.refresh_materialized_simple_record() does it all, btw |
13:11 |
csharp |
I also found that rmsr rebuilding needed to happen after running pingest.pl - lots of missing data before I re-did that |
13:11 |
csharp |
that is to say, I ran the refresh function post-upgrade, pre-reingest and needed to re-run the refresh after the reingest anyway :-/ |
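[Editor's note: the full rebuild csharp mentions is a single call; per his experience above, it needs to run after pingest.pl finishes, not before. Sketch assuming a stock Evergreen schema.]

```sql
-- Rebuilds reporter.materialized_simple_record in one pass.
SELECT reporter.refresh_materialized_simple_record();
```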
13:13 |
Dyrcona |
Well, chatting privately with JBoyer earlier this morning, I suggested that we just mark the bug invalid and tell people to rebuild the table in the database. |
13:14 |
csharp |
it's not *that* big a deal (as long as it's clear that you need to do it ;-) |
13:16 |
Dyrcona |
You don't need to do it every upgrade, just when something that affects rmsr happens. |
13:18 |
csharp |
right |
13:19 |
Dyrcona |
Hopefully, that ends up in the release notes! ;) |
13:20 |
Dyrcona |
My inclination, now, is to set that bug to a status that hides it from general search, but I'll leave it alone. |
13:21 |
Dyrcona |
Someone whispered in my ear that there might be a bug in post 3.1 versions of metabib.reingest_record_attributes, so I'm going to look into that. |
13:23 |
Dyrcona |
Hmm. Apparently not... |
13:24 |
Dyrcona |
At least not what I suspected. Seems normal. |
13:32 |
csharp |
we haven't noticed any trouble since upgrading to 3.2 over the weekend |
13:33 |
Dyrcona |
Yeah. I'm going to try doing Lp 1813172 and testing that on a db upgraded from 3.0 to 3.2. |
13:33 |
pinesol |
Launchpad bug 1813172 in Evergreen "Add option to specify metabib record attributes for reingest to pingest.pl" [Wishlist,New] https://launchpad.net/bugs/1813172 - Assigned to Jason Stephenson (jstephenson) |
13:42 |
|
yboston joined #evergreen |
13:49 |
eby |
Question if there is anyone that knows offhand. I think I'm not getting my keywords right. From the code and documentation it seems that shelf hold expire dates are moved ahead if they fall on a closed date, but not if there is a closed date between the dates, at least in our experience.
13:50 |
eby |
I looked for a bug about the feature and wasn't able to find any but maybe i'm searching wrong. I did find kmlussier ask years ago but with no answer http://irc.evergreen-ils.org/evergreen/2013-12-04#i_51793 |
13:50 |
eby |
I saw there was new overlap code with a api level of zero but i'm having trouble telling if it does this |
13:51 |
eby |
http://git.evergreen-ils.org/?p=Evergreen.git;a=blob;f=Open-ILS/src/perlmods/lib/OpenILS/Application/Storage/Publisher/actor.pm#l363 |
13:51 |
Bmagic |
csharp++ # 3.2 upgrade |
13:54 |
Dyrcona |
eby: due dates and shelf expire times are only adjusted if they fall on a closed date, not if there is a closed date between now and then. |
13:54 |
eby |
thanks Dyrcona that is what i wanted to confirm. i'll work on a launchpad feature request |
14:04 |
|
khuckins joined #evergreen |
14:16 |
Dyrcona |
eby++ # For unexpected synchronicity.... |
14:17 |
Dyrcona |
Something in the logs that eby shared is highly relevant to something that I've been thinking about lately. |
14:17 |
eby |
2013 must have been a good thought year |
14:20 |
Dyrcona |
Next gen. and the future and all of that. :) |
14:23 |
Dyrcona |
Hmm. Back to that potential bug in metabib.reingest_record_attributes... My test run of just doing that on a 3.0 database with pingest.pl has already processed the first 8 batches. On a 3.2 database, it still hasn't finished the first batch. I started the latter first.
14:26 |
Dyrcona |
I wonder if specifying the pattr_list attribute can really slow things down. That bears looking into. |
14:32 |
Dyrcona |
It's now up to 12 batches done on the 3.0 database... |
14:33 |
Dyrcona |
Could just be the luck of the draw, but the 3.2 database has now finished 5 batches. |
15:01 |
|
jihpringle joined #evergreen |
15:15 |
sandbergja |
We just had a mysterious few cases of holds that needed to be re-targeted. Our branch librarian is wondering about a report that would list any holds that might need to be re-targeted. |
15:16 |
sandbergja |
Does anybody have either an in-client or SQL way to get a list like that? |
15:16 |
jeff |
stepping back from SQL, how would you describe a "hold that needs to be re-targeted"? |
15:18 |
sandbergja |
jeff: hahaha you are asking a good question. The case that sparked this conversation was a handful of books that were on hold for a patron, they were checked in, they were the only copies of those books in the consortium, but wouldn't get captured to fill the holds |
15:19 |
sandbergja |
Retargeting the holds allowed those books to get captured and on to the patrons, but I don't have a sense for why those specific copies weren't considered targets in the first place |
15:21 |
berick |
sandbergja: were they newly cataloged books? |
15:21 |
jeff |
a common reason is that the copies were in an unsuitable state before check-in. |
15:22 |
jeff |
(though berick's is probably the even-more-common reason) |
15:22 |
jeff |
when you say they were "checked in", were they on loan to another patron prior to being checked in? |
15:22 |
* mmorgan1 |
was also thinking of what berick said. |
15:23 |
jeff |
i skipped over it because i read into sandbergja's description as indicating that the copies were checked out to another patron. on re-read, i realized that was an assumption on my part. :-) |
15:24 |
mmorgan |
Also bug 1686463 may be relevant |
15:24 |
pinesol |
Launchpad bug 1686463 in Evergreen "Wishlist: Background targeting of holds when items are edited into a holdable state" [Wishlist,Confirmed] https://launchpad.net/bugs/1686463 |
15:26 |
jeff |
apropos of nothing in the current conversation: if anyone has a record cleanup / authority control vendor (or non-vendor method!) that they care to share experience on, I'm interested in hearing your thoughts either here or via msg. |
15:26 |
sandbergja |
berick: they weren't newly cataloged, but the unsuitable state sounds likely |
15:27 |
jeff |
auditor.asset_copy_history (or _lifecycle) can be helpful there. |
15:27 |
sandbergja |
jeff: oh, good idea! i'll check there |
15:28 |
sandbergja |
And going back to the SQL side, what is the SQL version of a hold not having targeted a possible copy? |
15:29 |
sandbergja |
Is it that action.hold_request.current_copy is Null? |
15:30 |
berick |
sandbergja: there is a way to run the hold targeter to make it only retarget "broken" holds. (holds with no current copy, holds that target a defunct copy for whatever reason) |
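[Editor's note: a hedged SQL sketch of one flavor of "broken" hold berick describes (open holds with no current copy). Column names assume the stock action.hold_request table; a hold targeting a defunct copy would need a further join against asset.copy.]

```sql
-- Open, unfrozen holds with no copy currently targeted.
SELECT ahr.id, ahr.hold_type, ahr.target, ahr.request_time
  FROM action.hold_request ahr
 WHERE ahr.current_copy IS NULL
   AND ahr.capture_time IS NULL
   AND ahr.fulfillment_time IS NULL
   AND ahr.cancel_time IS NULL
   AND NOT ahr.frozen;
```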
15:30 |
sandbergja |
Oh, nice! |
15:30 |
sandbergja |
I'll take a look at that |
15:31 |
berick |
sandbergja: see the --soft-retarget-interval option via /openils/bin/hold_targeter.pl --help |
15:31 |
sandbergja |
berick++ |
15:31 |
sandbergja |
jeff++ |
15:35 |
Dyrcona |
jeff: We use Backstage Library Works, and they do a good job. |
15:36 |
Dyrcona |
I basically marc_export a file of new records every quarter and then process their output with this script: https://pastebin.com/V03gL1S9 |
15:38 |
sandbergja |
jeff: We've used Backstage as well, just for a one-time cleanup/authority record matching. They did a good job, and had a good knack for explaining things for the less cataloging-oriented folks in our consortium |
15:39 |
Dyrcona |
MVLC uses/used Backstage, too, at least while I was there. I assume that they still do, but don't know for certain. |
15:39 |
sandbergja |
We ran out of money for authority control, though, and we've been working to develop a good-enough way to do it ourselves |
15:50 |
sandbergja |
(mainly running SQL reports of headings that didn't match using metabib.browse_entry_def_map.authority = NULL, having catalogers look over the more suspicious headings, and then using this script to fetch those authority records and load them into Evergreen: https://github.com/sandbergja/dlc_authority_fetcher )
16:07 |
csharp |
jeff: +1 for backstage - we used them for a major bib cleanup in 2011 and have used them for authority update and upgrading bibs to RDA since then |
16:08 |
csharp |
jeff: if you want the gory details, Elaine Hardy is your best contact |
16:09 |
csharp |
but they were patient and professional |
16:13 |
jeff |
Dyrcona++ sandbergja++ csharp++ berick++ |
16:16 |
|
Dyrcona joined #evergreen |
16:21 |
|
seymour joined #evergreen |
16:31 |
pinesol |
News from qatests: Testing Success <http://testing.evergreen-ils.org/~live> |
17:18 |
|
mmorgan left #evergreen |
17:32 |
|
jvwoolf left #evergreen |
17:34 |
|
yboston joined #evergreen |
17:55 |
|
yboston joined #evergreen |