| 04:54 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html |
| 05:09 |
|
berickm joined #evergreen |
| 07:38 |
bshum |
@later tell eeevil Hmm, wondering about the performance of record_attr_flat when there end up being lots of uncontrolled_record_attr_value rows to look for. I'm getting truly terrible query performance on things like "SELECT COUNT(*) FROM metabib.record_attr_flat WHERE attr = 'oclc'" (our custom definition for tag 001) vs. looking for something that is in vlist, like 'search_format' |
| 07:38 |
pinesol_green |
bshum: The operation succeeded. |
| 07:47 |
bshum |
My end goal was to figure out how many bibs still need to be reingested with our custom record attribute. Generally counting from uncontrolled, I can quickly find out, but I couldn't use that to narrow exact bib IDs of what remains. |
| 07:48 |
bshum |
We've had to do them in small spurts because it has been taking terribly long to reingest the one field. |
| 09:28 |
* dbs |
overlooked the "k" in those measurements at first |
| 09:28 |
eeevil |
bshum: I don't want to sound flippant, but that's not a use case that the software itself cares about. we can't make every single query fast; there will always be ones that could be faster if we changed the shape of the data, but that would in turn slow down some other query. the question "how many records don't have this uncontrolled attribute, globally" is not something that needs to be fast, because it's not something we'd put into, say, a |
| 09:28 |
eeevil |
search result or a checkout api call |
| 09:30 |
eeevil |
bshum: that query (and ones like it) is at the opposite end of the spectrum from how that view is used by the software and how it's designed to be used |
| 09:36 |
eeevil |
bshum: now, if you just wanted to know the count, you could likely make things faster by restructuring your query to build an array of the ids of the values from uncontrolled_record_attr that are used as oclc number, and test record_attr_vlist with the overlaps operator. or, a different tactic is to just join record_attr_vlist on uncon_record_attr with (vlist @> intset(un_rec_attr.id) and un_rec_attr.attr=$oclc_attr_def_id) ... that would be |
| 09:36 |
eeevil |
doing what the view does for exactly the one attr |
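For the logs, a sketch of the two query shapes eeevil describes above. The table and function names (metabib.record_attr_vector_list, metabib.uncontrolled_record_attr_value, intset()) follow the chat's shorthand and the MVF-era schema, so verify them against your Evergreen version before running anything; this is illustrative, not canonical:

```sql
-- Approach 1: collect the uncontrolled value ids for the attr once,
-- then test each record's vlist with the overlaps operator (&&).
SELECT COUNT(*)
  FROM metabib.record_attr_vector_list v
 WHERE v.vlist && (SELECT ARRAY_AGG(id)
                     FROM metabib.uncontrolled_record_attr_value
                    WHERE attr = 'oclc');

-- Approach 2: join directly, testing containment per uncontrolled
-- value -- doing what the flat view does for exactly the one attr.
SELECT COUNT(*)
  FROM metabib.record_attr_vector_list v
  JOIN metabib.uncontrolled_record_attr_value u
    ON (v.vlist @> intset(u.id) AND u.attr = 'oclc');
```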
| 09:44 |
eeevil |
think of it this way, everything that lives in the metabib schema has a very targeted purpose that is /not/ storing data. that schema is, by design, a dumping ground for specific denormalizations of central, authoritative data, and each table and view is intentionally shaped so that its primary purpose (search record attributes, backing tsearch queries, turn one record's attribute-int-arrays back into something a human can use) is as fast as |
| 09:44 |
eeevil |
possible. all other purposes (that is, "how X performs in arbitrary queries") are, at best, secondary concerns, because if we need some other purpose or use case to be fast we can trade a little more disk space for that speed |
| 09:45 |
eeevil |
and, the use case for record_attr_flat is "give me the attribute names and values for this record (or some small set of records)" |
| 09:46 |
* eeevil |
disappears in a puff of uncontrolled smoke |
| 10:45 |
bshum |
eeevil++ |
| 10:46 |
bshum |
Thanks for the rundown. I understand what you're saying here. And appreciate the suggestion on ways of answering the question via database approaches |
| 10:47 |
bshum |
I'll tinker more with it once we finish moving our servers. |
| 14:42 |
remingtron |
berick: thanks. we're hoping to start drafting docs for the web staff UIs as development continues |
| 14:43 |
remingtron |
any comments on which UIs are most "stable" at this point? |
| 14:43 |
remingtron |
or ones to wait on? |
| 14:43 |
berick |
remingtron: this is the best general source -> http://evergreen-ils.org/dokuwiki/doku.php?id=dev:browser_staff:dev_sprints:1 -- however, given that none of these have been thoroughly tested, "stable" is a relative term |
| 14:44 |
berick |
anything marked through is feature complete within the realm of the current "sprint" -- that is, there are missing features, but they will be done later |
| 14:46 |
berick |
documenting basic stuff, like login, the splash page, the nav bar, etc. is a good start. then moving on to simpler interfaces -- ones that are less likely to change -- is a good next step |
| 14:47 |
berick |
having said that, though, I have a hard time thinking these UIs will change *drastically* regardless of bugs or missing features |
| 14:47 |
berick |
since they all mimic the XUL client very closely |
| 14:48 |
berick |
hmm, smells like hurricane outside |
| 14:48 |
csharp |
berick: are you in the storm's path? |
| 14:49 |
remingtron |
berick: thanks for the advice, feel free to run and board up the windows if needed |
| 14:49 |
berick |
csharp: just the outer swirly bits. we'll get rain and modest wind. |
| 15:19 |
csharp |
my patch http://git.evergreen-ils.org/?p=evergreen/pines.git;a=blobdiff;f=Open-ILS/src/templates/opac/parts/record/copy_table.tt2;h=c0fbbd2c98b08ca26cd9537b42e21584e464dc80;hp=c29a0171f94c3f4e526dae5cfd651a65d5de9d1d;hb=af5b18ef988dfe2b21f536dd963737a030801899;hpb=c5ee6b43f566ea65b3361cc8a4056edc1c32bbd8 |
| 15:19 |
dbs |
So it could be a bug in RDFa Play |
| 15:19 |
csharp |
oops - dead link |
| 15:19 |
bshum |
dbs: My quick googling finds me the LP bug where we chatted about the testing tool and its quirks :) |
| 15:19 |
csharp |
http://git.evergreen-ils.org/?p=evergreen/pines.git;a=commit;h=af5b18ef988dfe2b21f536dd963737a030801899 - this works |
| 15:19 |
csharp |
(for the logs) |
| 15:20 |
dbs |
bshum: Well, there are different testing tools, each with their own quirks |
| 15:20 |
bshum |
No doubt. |
| 15:22 |
dbs |
yeah, nevermind. As weird as that markup is (really? a div for hold counts that wraps the copy table?) that's not the issue with RDFa Play |
| 15:22 |
dbs |
RDFa Play gets sulky about circular references and refuses to play in that case. And we have one of those in standard markup. |
| 16:29 |
jeff |
there may be permissions required and there's certainly at least one org unit setting that you'll want to verify for this use case -- reset request time on hold uncancellation or somesuch. |
| 16:30 |
|
tspindler left #evergreen |
| 16:50 |
|
ningalls joined #evergreen |
| 17:11 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html |
| 17:16 |
yboston |
@marc 024 |
| 17:16 |
pinesol_green |
yboston: A standard number or code published on an item which cannot be accommodated in another field (e.g., field 020 (International Standard Book Number), 022 (International Standard Serial Number), and 027 (Standard Technical Report Number)). The type of standard number or code is identified in the first indicator position or in subfield $2 (Source of number or code). (Repeatable) [a,c,d,z,2,6,8] |
| 17:19 |
hopkinsju |
Thanks jeff++ |
| 09:34 |
|
yboston joined #evergreen |
| 09:37 |
|
kmlussier joined #evergreen |
| 09:56 |
|
krvmga joined #evergreen |
| 09:57 |
krvmga |
when a patron is logged into the opac, is there a limit on how many items will be displayed in the Current Items Checked Out screen? |
| 09:58 |
krvmga |
a patron has told me he can only see 15 when he has over 30 checked out. i just tested it (checked out 37) and see all 37 in one screen. |
| 10:01 |
jeff |
krvmga: holds are paged at 15 now, i believe. if you have old templates, you won't have the right paging logic and you'll only be able to see 15. |
| 10:02 |
jeff |
krvmga: checked out items might be using the patron's search result preference? dunno. i haven't encountered the issue you describe, and haven't taken a moment to look at the code just now. |
| 10:03 |
krvmga |
i did my test with a dummy user. search results set to 10 per page. |
| 10:04 |
kmlussier |
krvmga: Did you verify that this patron does indeed have 30+ items checked out? |
| 10:05 |
krvmga |
kmlussier: yes, he currently has 32 out |
| 10:06 |
jeff |
mosh++ |
| 10:07 |
|
tspindler left #evergreen |
| 10:09 |
kmlussier |
krvmga: If you can see 30+ on the current items checked out screen and another user can't, my first guess would be differences in the files on your brick heads. Or maybe the patron is looking at another screen, not the current items checked out screen. |
| 10:09 |
kmlussier |
For example, I think the "my lists" screen used to limit the user to 15 titles. Or maybe it was 10. |
| 10:12 |
krvmga |
kmlussier: i just tested the display from all five brickheads. the 37 items in my dummy account show up on all of them. |
| 10:12 |
krvmga |
kmlussier: i think it was 10 |
| 10:13 |
* jeff |
nods |
| 10:13 |
krvmga |
this is exactly what the patron wrote to me: |
| 10:13 |
krvmga |
I regularly have more than 30 items checked out at a time, as well as many requests. In order to navigate to the second page of my Checked Out items (items number 16 and above), I have to go to the second page of items On Hold; then the second page of my items Checked Out appears. But if I have over 30 items checked out, I cannot see more than the first 30; the final 6 of today's Checked Out items do not display. If after look |
| 11:01 |
|
krvmga joined #evergreen |
| 11:03 |
kmlussier |
krvmga: I think I may have found something. |
| 11:03 |
kmlussier |
krvmga: If you start off on the second page of holds and then click "Checked Out" in the patron dashboard, the paging links will persist. |
| 11:08 |
krvmga |
yes, i see that. i had tested only from the tabbed interface. |
| 11:08 |
kmlussier |
That is, the paging limiters. |
| 11:08 |
kmlussier |
I'll see if I can push a fix for it this afternoon. |
| 11:48 |
|
jtaylor__ joined #evergreen |
| 17:08 |
|
mmorgan left #evergreen |
| 17:11 |
Bmagic |
kmlussier: I don't get that message on our production environment (2.6.1). I also do not see it on our 2.4.1 installation |
| 17:14 |
bshum |
Bmagic: I think kmlussier found the library setting she needed to enable in order to get the message to appear. For the logs, it is in the OPAC category, called "Warn patrons when adding to a temporary book list" |
| 17:15 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html |
| 17:15 |
Bmagic |
bshum: right on |
| 17:16 |
Bmagic |
I have a question as well. One of our catalogers has noticed a difference in behavior from 2.4.1 to 2.6.1. When cataloging a new MARC record with fast item add, after the item details window opens and you "Modify Copies" - the main window DOES NOT refresh and show the newly imported bib. |
| 17:22 |
Bmagic |
Looking at the code, I don't see any differences in volume_copy_editor.js or volume_editor.js between 2.4.1 and 2.6.1. Any thoughts? |
| 01:15 |
|
zerick joined #evergreen |
| 04:09 |
|
gsams joined #evergreen |
| 04:16 |
|
berickm joined #evergreen |
| 04:45 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html |
| 07:11 |
|
collum joined #evergreen |
| 07:15 |
|
kbeswick joined #evergreen |
| 07:48 |
|
rjackson-isl joined #evergreen |
| 09:30 |
kmlussier |
eeevil: Following up on my question from yesterday, I have a record with 2 Located URI's: one for MBI and one for MCD. I set my preferred search library to MCD (not the system). I search the consortium and see both links as expected on https://jasondev.mvlcstaff.org/eg/opac/record/1554276, but MBI is still appearing first. |
| 09:30 |
kmlussier |
In this case, it should be showing the preferred library, MCD, right? |
| 09:30 |
kmlussier |
That is, it should be showing the preferred library first. |
| 09:33 |
eeevil |
kmlussier: that's what I would expect from the code in the repo, and what noble saw in testing ... let me see if we still have the test server up with that (though ALA has removed most of my human resources for things like knowing where an old test server might be ...;) ) |
| 09:33 |
|
ldw joined #evergreen |
| 09:34 |
kmlussier |
eeevil: OK, thanks. I could probably take a look on noble's server too. |
| 09:42 |
jeff |
in some ways, asset.copy rows are like tree rings. |
| 11:18 |
Bmagic |
As far as I can tell, our database has never had this function. Oddly enough, it didn't seem to matter; all of the vandelay stuff worked. Looking back at the SQL upgrade scripts, the last time it was introduced was 0738, around version 2.2.3 |
| 11:51 |
csharp |
Bmagic: right - I found that too - it didn't make it into any upgrade scripts (on the paths I've taken, anyway) |
| 11:51 |
Bmagic |
csharp: You don't have it in your production database either? |
| 11:51 |
csharp |
correct |
| 11:52 |
csharp |
but... we don't really use Vandelay at this point - we found it when testing acq record import |
| 11:53 |
Bmagic |
csharp: weird, the situation here is: we upgraded to postgres 9.2 using pg_dump evergreen instead of pg_dumpall, and Vandelay was working just fine before the upgrade. Now postgres is complaining about the 2-argument function not existing. Odd, but if I use pg_dumpall, it works fine... puzzle, anyone? |
| 11:53 |
csharp |
hmmm |
| 12:01 |
jboyer-home |
Bmagic: what are the flags you’re using for dump and dumpall? |
| 12:02 |
Bmagic |
pg_dump evergreen > db1_pgdump.sql |
| 12:03 |
Bmagic |
and that is what is in production now... later, after finding that vandelay wasn't working, I used pg_dumpall -o > pgdumpall.sql |
| 12:04 |
Bmagic |
After testing the restore on a dev box from the dumpall, vandelay is working (even without the 2-argument function) |
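For the logs, a note on the pg_dump/pg_dumpall difference in play here: pg_dump captures a single database but no cluster-wide objects, while pg_dumpall captures every database plus globals (roles, tablespaces), and the -o flag Bmagic used additionally dumps OIDs with the table data. A hedged sketch of the usual single-database equivalent, assuming a database named evergreen:

```sh
# Dump cluster globals (roles, tablespaces) and the one database
# separately; together these approximate a plain pg_dumpall run
# for a single-database cluster.
pg_dumpall --globals-only > globals.sql
pg_dump evergreen > evergreen.sql

# Restore globals first, then recreate and load the database.
psql -d postgres -f globals.sql
createdb evergreen
psql -d evergreen -f evergreen.sql
```

Whether a missing schema function can actually hide in that gap is unclear, which is why the chat leaves it as a puzzle.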
| 12:08 |
jboyer-home |
They were getting errors whether they tried to use match sets or not? |
| 12:10 |
hopkinsju |
jboyer-home: Yes, but the funny part is, if you don't specify an import queue the import *does* work. It goes into a queue that gets labeled "-" |
| 12:15 |
jboyer-home |
Does this return anything on either system? select * from vandelay.queue where match_set is not null; |
| 15:03 |
jcamins |
jeff: I seem to recall the BOFH pioneering that feature. |
| 15:19 |
jeff |
assuming there's an account or two found, it makes it far less important to worry about getting the proper spelling of their name. |
| 15:34 |
|
akilsdonk_ joined #evergreen |
| 15:35 |
pinesol_green |
[evergreen|Kathy Lussier] Documentation for Located URI Visibility - <http://git.evergreen-ils.org/?p=Evergreen.git;a=commit;h=d5eb3a3> |
| 15:44 |
|
eeevil joined #evergreen |
| 15:44 |
|
Callender joined #evergreen |
| 16:04 |
|
tspindler left #evergreen |
| 16:57 |
|
tsbere_ joined #evergreen |
| 16:58 |
|
dreuther joined #evergreen |
| 17:10 |
|
mmorgan left #evergreen |
| 17:18 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html |
| 17:23 |
|
akilsdonk joined #evergreen |
| 17:25 |
|
jcamins_ joined #evergreen |
| 17:31 |
|
shadowsp1r joined #evergreen |
| 02:29 |
|
jeff_ joined #evergreen |
| 05:10 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html |
| 06:02 |
|
gmcharlt joined #evergreen |
| 06:54 |
|
Callender joined #evergreen |
| 07:36 |
|
collum joined #evergreen |
| 09:20 |
|
yboston joined #evergreen |
| 09:48 |
kmlussier |
Good morning all! |
| 09:51 |
jeff |
morning! |
| 09:53 |
kmlussier |
I'm curious about berick's code at bug 1308239. We had tinkered with the idea of using precats for ILLs from our statewide system, but in my testing I found that Evergreen won't allow me to check out an existing precat. |
| 09:53 |
pinesol_green |
Launchpad bug 1308239 in Evergreen "Support targeting and fulfillment of precat copy holds (for ILL)" (affected: 1, heat: 6) [Wishlist,New] https://launchpad.net/bugs/1308239 |
| 09:53 |
kmlussier |
I'm wondering if that code would fix that issue. |
| 09:54 |
jeff |
I'm pretty sure we've had libraries that used precats for ILL, but I'm not certain of their workflow. They may not have been capturing and using the system for notification -- they might just have been calling the patron then checking out the item as a pre-cat (using the originating library's barcode, which is one of the reasons they moved away from that). |
| 09:56 |
|
krvmga joined #evergreen |
| 09:56 |
jeff |
I have interest in that feature / bugfix. I'm not grabbing the bug now, but will try to review when I can. |
| 09:56 |
kmlussier |
Our preference is to use precats, because the alternative is to use brief bib records that are automatically deleted when the item is returned. But then that fills up the database with all of these deleted bib records. |
| 09:56 |
jeff |
kmlussier: That behavior is surprising, but I haven't tested it. |
| 09:57 |
krvmga |
we just upgraded to 2.5 and i'm getting complaints about the labeling of icons in search returns. i've been looking around but i can't see where to fix/alter the labels. anyone know off hand? |
| 09:57 |
jeff |
Yeah, we do that now with NCIP. It's not ideal compared with precats. |
| 09:57 |
jboyer-home |
I don’t know if ILL handling is consistent across Evergreen Indiana, but I know at my previous library they used the OCLC ILL number as the barcode, so there wasn’t a problem with re-using barcodes like that; then, to keep things tidy, the pre-cats are deleted later. |
| 13:53 |
jeff |
krvmga_: both the ten and thirteen are there. |
| 13:54 |
krvmga_ |
jeff: since we started talking about it, one of our catalogers overlaid the record. :) |
| 13:54 |
krvmga_ |
she just told me. |
| 13:55 |
jeff |
the perils of testing theories on live systems. :-) |
| 14:00 |
|
DPearl1 joined #evergreen |
| 14:00 |
|
tspindler joined #evergreen |
| 14:01 |
|
krvmga_ joined #evergreen |
| 14:04 |
|
kbeswick_ joined #evergreen |
| 14:07 |
jeff |
Business::ISBN does the right thing and returns undef when you call as_isbn10 on a 979 isbn13 :-) |
| 14:13 |
jeff |
OpenILS::WWW::AddedContent::Amazon might fail on the 979 isbn13s, but probably not in a way that impacts anything else. |
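For the logs, the 978/979 asymmetry jeff describes can be sketched outside of Perl. This is an illustrative Python reimplementation of the behavior, not Business::ISBN's actual code: only 978-prefixed ISBN-13s map back into the ISBN-10 space, so a 979 input yields nothing, mirroring as_isbn10 returning undef.

```python
def isbn13_to_isbn10(isbn13):
    """Return the ISBN-10 form of an ISBN-13, or None when no such
    form exists (979-prefixed ISBNs, malformed input)."""
    digits = isbn13.replace("-", "")
    if len(digits) != 13 or not digits.isdigit():
        return None
    if not digits.startswith("978"):
        # Only the 978 "Bookland" prefix maps to ISBN-10; 979 ISBNs
        # have no ISBN-10 equivalent, so behave like as_isbn10 -> undef.
        return None
    core = digits[3:12]
    # ISBN-10 check digit: weights 10 down to 2 over the nine core
    # digits, modulo 11; a remainder of 10 is written as 'X'.
    total = sum(w * int(d) for w, d in zip(range(10, 1, -1), core))
    check = (11 - total % 11) % 11
    return core + ("X" if check == 10 else str(check))
```

So isbn13_to_isbn10("9780306406157") gives "0306406152", while any 979-prefixed input gives None.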
| 14:19 |
|
bmills joined #evergreen |
| 14:23 |
|
tspindler left #evergreen |
| 14:23 |
|
tspindler joined #evergreen |
| 16:21 |
rangi |
the whole district has a pop of 30k .. its quite an amazing story really |
| 16:25 |
kmlussier |
I love seeing libraries that do so much to engage with their communities. Almost makes me wish I were working in a library again. Almost. |
| 16:32 |
|
tspindler left #evergreen |
| 16:53 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html |
| 17:04 |
phasefx2 |
bbl |
| 17:18 |
jeffdavis |
When upgrading a standalone db server from 2.4->2.6, it looks to me like there are no new dependencies that need to be installed. Can anyone confirm that? |
| 17:20 |
bshum |
jeffdavis: I haven't found anything specific yet. Though I think I did find that one of the upgrade scripts required an extra deb than before for me to run it on a fresh standalone DB. |
| 01:14 |
|
remingtron_ joined #evergreen |
| 01:14 |
|
dbwells_ joined #evergreen |
| 05:12 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html |
| 06:41 |
|
berickm joined #evergreen |
| 06:42 |
|
Callender joined #evergreen |
| 07:47 |
|
rjackson-isl joined #evergreen |
| 15:15 |
jboyer-home |
Bmagic: How many concurrent bibs is your parallel reingest running vs. the number of cores in your db? Depending on how hard it’s working, you may be pushing all of the necessary data out of cache and the db can’t keep up with searching and reingesting without the request timing out. We’ve had that problem during past upgrades. |
| 15:15 |
jboyer-home |
If you can eventually get results by performing the same search repeatedly that’s likely what’s going on. |
| 15:16 |
Bmagic |
jboyer-home: If we take the reingest out of the equation - the opac still does not return results. Which is what prompted me to do the ingest. I was thinking that perhaps I needed to reingest in order for the search results to start working. Is that right? |
| 15:18 |
jboyer-home |
Depends on how much of an upgrade you did and how much the search has changed. It’s possible that pre-reingest things can’t work, or it could also be that a cold start on your db takes a while to get running. (I think I have a 2.6 DB sitting around that hasn’t been reingested, but I don’t have any opensrf services pointing at it yet to test) |
| 15:24 |
|
gsams joined #evergreen |
| 15:26 |
|
vlewis joined #evergreen |
| 15:32 |
Bmagic |
jboyer-home: my current theory is the 9.1 -> 9.2 is the culprit - I am setting up another VM and this time I will leave it with 9.1 and upgrade to 2.6. Guess and check method |
| 15:50 |
bshum |
Hmm |
| 15:59 |
Bmagic |
bshum: I eliminated the old PG after the upgrade process |
| 16:00 |
Bmagic |
bshum: that could be part of the issue as well, because there are a ton of dependencies when removing 9.1... and perhaps some needed by 9.2, so that is a little shaky too |
| 16:00 |
jboyer-home |
2 minor data points: I finally got a 2.6 instance built and pointed at my 2.6 db, which I’m certain hasn’t been reingested. It timed out 2-3 times (I assume no one has test db servers as nice as their prod machines) |
| 16:01 |
jboyer-home |
2nd data point: I’m going to squee like a schoolgirl about Ansible at the next Eg Intl conference. (Hint: a couple of hours ago there was no machine to even run 2.6 on) |
| 16:02 |
jboyer-home |
Re: searches: but it did eventually return results. (Had a concurrency error in my thought processes there) |
| 16:02 |
Bmagic |
jboyer-home: Are you saying that after the 3rd try, you got results? and yes, ansible is awesome |
| 16:02 |
jboyer-home |
3rd or 4th, yeah. |
| 16:03 |
Bmagic |
jboyer-home: See, I never got results, even after 15 tries... and waiting for the DB to stop processing... 20 minutes goes by, search again and nothing |
| 16:05 |
Bmagic |
bshum: I am now... some interesting things going on in there, but they finish after awhile... then I search the same search... no results |
| 16:06 |
Bmagic |
bshum: I do see my query getting to the DB, so I know for sure that my search is getting passed to my DB |
| 16:19 |
|
dconnor joined #evergreen |
| 16:22 |
Bmagic |
jboyer-home: bshum: tsbere: I am almost done upgrading this db to 2.6 from 2.4.1 - and then the search test again..... drumroll |
| 16:24 |
jboyer-home |
I am sick with jealousy. It takes close to overnight to go from 2.5.2 to 2.6.1 for us. (on an old antique server, but still) |
| 16:30 |
|
artunit joined #evergreen |
| 16:34 |
Bmagic |
jboyer-home: what's the result of du -sh /var/lib/postgresql/9.x/main for you? Ours is 61GB |
| 16:36 |
tsbere |
We are around 161GB as of Wednesday, dunno if it has gone up/down since then |
| 16:36 |
Bmagic |
1GB nics just don't cut it |
| 16:37 |
tsbere |
Our dump files, however, are only in the 8GB range. |
| 16:39 |
jboyer-home |
Our dumps take roughly 2-3 hours and are only 20-21GB (I don’t dump the auditor schema, we have 3 copies of the db on live hardware, the dumps are just for end of the world OH NOES and testing) |
| 16:39 |
jboyer-home |
It’s the test restoring that’s really rough. Servers with 128GB of RAM take multiple days. :( |
| 16:43 |
Bmagic |
jboyer-home: I feel some of that pain over here. We are in a memory drought ourselves |
| 16:44 |
jboyer-home |
The real deals have 256GB, so it’s not as bad as it could be. |
| 16:45 |
Bmagic |
awwwwww, 30 minutes into 2.5.3 - 2.6.0 and this "Can't locate XML/LibXSLT.pm in" gotta install that and START AT THE BEGINNING |
| 16:45 |
jboyer-home |
Ouch. |
| 16:46 |
Bmagic |
But if that happened after 2 days, I would be really upset |
| 16:48 |
|
vlewis_ joined #evergreen |
| 16:48 |
jboyer-home |
I was curious and just checked; apparently I started a restore on our migration testing server last week. I know this because the COPY for asset.copy is still running with an xact_start of 6/13. D: |
| 16:51 |
|
ldw_ joined #evergreen |
| 16:54 |
jboyer-home |
Good luck, Bmagic! |
| 16:55 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html |
| 18:10 |
|
dbwells joined #evergreen |
| 18:28 |
|
mtcarlson_away joined #evergreen |
| 18:28 |
|
b_bonner joined #evergreen |
| 05:00 |
|
RAIDoperator joined #evergreen |
| 05:02 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html |
| 06:44 |
|
Callender joined #evergreen |
| 07:52 |
|
geoffsams joined #evergreen |
| 07:56 |
|
jboyer-home joined #evergreen |
| 12:43 |
jeff |
rsoulliere: have you tried with a small batch of records as input? |
| 12:44 |
Dyrcona |
yeah, I had tried to make that go away when it comes from standard input but it didn't work out. |
| 12:44 |
Dyrcona |
That message should have probably just been removed. |
| 12:46 |
rsoulliere |
Jeff: let me try a few tests on my end and I will report back. |
| 12:48 |
jeff |
fwiw, i don't think it will cause any issue, but i don't think marc_export requires that you pass your opensrf config file location if it's in the default location, and I don't think it knows about or respects a -c argument, just --config |
| 12:48 |
jeff |
i also don't think the previous version paid attention to a -c either, but i could be wrong there. |
| 12:48 |
jeff |
it's likely not causing any issues, just isn't doing anything. |
| 13:04 |
jcamins |
RoganH: yeah, I've done that. Not with passwords, because I never hit enter after typing in a password. |
| 13:05 |
RoganH |
I want one of those eye trackers that change your active window based on where you're looking. Tragically my library probably won't consider laziness an adequate disability to justify the accommodation. |
| 13:05 |
jcamins |
At least, I try not to. |
| 13:05 |
rsoulliere |
marc export testing update: I ran the marc_export script on a small list of ids and it worked as expected. Looking over the logs... one correlation is that if my text file had no records, the script exported all records. |
| 13:05 |
jcamins |
dbs: that's exactly why I don't hit enter after entering a password. |
| 13:06 |
dbs |
jcamins: thanks |
| 13:07 |
jcamins |
dbs: no judgement intended. |
| 13:26 |
Dyrcona |
--all isn't a default. |
| 13:26 |
Dyrcona |
If you give it no arguments or standard input, it sits there doing nothing. |
| 13:27 |
Dyrcona |
That's why it prints "waiting for input." |
| 13:27 |
dbs |
13:11 < rsoulliere> marc export testing update: I ran the marc_export script on a small list of ids and it worked as expected. Looking over the logs... one correlation is that if my text file had no records, the script exported all records. |
| 13:32 |
Dyrcona |
--all still isn't a default. It's because standard input was empty. |
| 13:32 |
Dyrcona |
Different bug. |
| 13:33 |
jeff |
i'm not sure that empty stdin would still export all -- but that's based on a quick read of the code earlier, so i'll defer to rsoulliere's empirical test results :-) |
| 13:33 |
Dyrcona |
The export code doesn't check Config::need_ids, so if the idlist is empty it looks like it will export everything. |
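For the logs, the failure mode Dyrcona describes reduces to a small control-flow sketch. The names here (gather_ids, export_all) are hypothetical illustrations, not marc_export's real API; the point is the missing guard that let an empty stdin fall through to the export-everything path.

```python
def gather_ids(stdin_lines, export_all=False):
    """Decide which record ids to export; None means "all records".

    The bug described above: the old code never checked whether it
    actually needed ids, so an empty stdin looked the same as --all
    and everything was exported.
    """
    if export_all:
        return None  # explicit --all: export the whole database
    ids = [int(line) for line in stdin_lines if line.strip()]
    # The guard that was missing: without --all, an empty id list
    # should mean "export nothing", not "export everything".
    return ids
```

With the guard in place, gather_ids([]) returns an empty list rather than None, so an empty input exports nothing.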
| 13:34 |
jeff |
aha |
| 13:35 |
hbrennan |
What does the Global checkbox mean/do in Circulation Limit Sets? |
| 13:43 |
dbs |
but I should have been explicit about that |
| 13:45 |
|
RoganH joined #evergreen |
| 13:54 |
|
RoganH joined #evergreen |
| 14:03 |
Dyrcona |
Well, it never got tested with an empty file or pipe, so same difference. |
| 14:49 |
|
sseng_ joined #evergreen |
| 14:49 |
|
ktomita joined #evergreen |
| 15:49 |
|
jeff joined #evergreen |
| 14:14 |
bshum |
gmcharlt: Springing this on you, but do you have any thoughts about 2.4 work? |
| 14:14 |
gmcharlt |
bshum: aiming for a release after ALA |
| 14:14 |
bshum |
#info gmcharlt aiming for an OpenSRF 2.4 release after ALA |
| 14:15 |
gmcharlt |
main thing I'd like at this point is more testing of the websocket work by berick |
| 14:15 |
bshum |
#info Get more testing of the websocket work started by berick, see https://bugs.launchpad.net/opensrf/+bug/1268619 and others |
| 14:15 |
pinesol_green |
Launchpad bug 1268619 in OpenSRF "WebSockets Gateway and JS Library" (affected: 1, heat: 6) [Undecided,New] |
| 14:16 |
bshum |
Sounds like a good thing. |
| 14:17 |
bshum |
Okay, we'll follow up on that after ALA, but in the meantime, folks should check out the bug and other announcements to help start testing the upcoming work for OpenSRF. |
| 14:18 |
bshum |
Thanks for the update gmcharlt++ |
| 14:18 |
bshum |
#topic Evergreen maintenance releases |
| 14:19 |
dbwells |
#info 2.5.5 and 2.6.1 are out |
| 14:19 |
bshum |
The next date on the calendar for that is 6/18, do we want to stick to that or perhaps shift things slightly? |
| 14:20 |
bshum |
dbwells++ # doing the release dance |
| 14:31 |
bshum |
I guess it depends on how comfortable you are separating the parts of the sprint into separate bugs |
| 14:31 |
bshum |
We could track the whole sprint as a blueprint and link it to each bug separately. |
| 14:31 |
bshum |
I guess it's more about how the final code will be presented. |
| 14:32 |
berick |
well, for the initial round of testing, i'm considering maintaining maybe a simple list somewhere. there will be /lots/ of little things. opening LP's for each would be cumbersome, imo. |
| 14:32 |
dbs |
#info dbs, Laurentian University |
| 14:32 |
berick |
after that's settled down, though, then we can leverage LP in the usual fashion |
| 14:33 |
bshum |
Sounds reasonable to me. |
| 14:35 |
dbs |
tpac-style |
| 14:35 |
bshum |
Next release meaning 2.7? |
| 14:35 |
RoganH |
I'm in favor of that. |
| 14:36 |
berick |
it will mean we really need to get in the websockets testing |
| 14:36 |
dbwells |
+1 to including as a preview |
| 14:36 |
bshum |
+1 to preview |
| 14:36 |
berick |
bshum: right, next meaning 2.7 |
| 15:01 |
kmlussier |
My concern is that this bug ultimately has big impacts on end users, and they aren't really seeing the nuances of the different approaches. |
| 15:01 |
RoganH |
kmlussier: I'm willing to read through it all. I agree it's important. I'm willing to do so and post about it but with the caveat that I might need correction if I miss a nuance. |
| 15:02 |
RoganH |
(Actually that seems likely that I will.) |
| 15:02 |
kmlussier |
For end users (not the devs), it might also be useful to have side-by-side screenshots of what happens. But, although I have done numerous screenshots in my last round of testing, I don't have much for what the original approach did. |
| 15:03 |
|
shadowspar joined #evergreen |
| 15:03 |
|
berick joined #evergreen |
| 15:03 |
|
JLDAGH joined #evergreen |
| 15:19 |
kmlussier |
I've been tied up in meetings all week and have just been catching up on this discussion. The rec_descriptor issue that bshum raised, where is that causing performance problems? I know reports was one of the areas bshum was poking at, but I'm confused as how it's related to holds. |
| 15:19 |
eeevil |
that's really just for reports, though. step 2 is use attr_flat instead |
| 15:20 |
jeff |
kmlussier: it can also slow down the hold permit checks at checkin when opportunistic hold capture takes place |
| 15:20 |
eeevil |
kmlussier: for cases where we have to test 100 holds and find that none will work for this copy, it's adding too much time to checkin. for normal cases where we only need to test a couple, it's not noticeable |
| 15:20 |
jeff |
kmlussier: given an item on a bib with many holds, but the item is not eligible for any of the holds, it can lead to timeouts at checkin |
| 15:21 |
bshum |
Well, it's noticeable all the time if you care about milliseconds, like our new library using a SIP sorter for checkins does. |
| 15:21 |
kmlussier |
So this is when the retarget checkin modifier is being used? |
| 15:30 |
eeevil |
so, age protection and transit distance restriction are likely drivers. bshum, do you use either of those? |
| 15:31 |
eeevil |
kmlussier: not ... exactly |
| 15:31 |
eeevil |
kmlussier: but, parts of "restrictive rule" will contribute to potentially exposing the issue |
| 15:31 |
bshum |
eeevil: No to age protection, and as for transit distance, sort of, but I have to do some more investigation on what that's actually doing if anything. |
| 15:32 |
bshum |
eeevil: I think we have some rules set with transit distance true and a value of 2 or something in the matrix |
| 15:32 |
bshum |
For stuff that was intended to be local pickup only or something. |
| 15:32 |
bshum |
Not sure if those were written correctly, actually, now that I'm looking at it again. |
| 15:38 |
bshum |
eeevil: This isn't exactly apples to apples, but I did a hold permit test on our old production DB hardware (pre-mvf upgrade scripts, etc.) vs. live and it was something like 12-18 ms vs. 400-800 ms |
| 15:38 |
bshum |
When I ripped out the rec_descriptor bits in live, that went down to like 24 ms or so |
| 15:38 |
bshum |
So I'm not sure transit range hurt me as much as that did |
| 15:39 |
eeevil |
bshum: well, it's not that |
| 15:39 |
eeevil |
I'm not saying that the problem isn't mrd |
| 15:40 |
eeevil |
what I'm saying is that it only really matters when you have a very long list of holds to test for |
| 15:40 |
eeevil |
and none of them pass |
| 15:40 |
eeevil |
if the first one passes, we look no further and capture |
| 15:40 |
eeevil |
but if we have to roll through all 100, then the difference matters |
| 15:41 |
eeevil |
so, if we create a mat-view for mrd, this will likely not be an issue |
| 15:41 |
eeevil |
in the case I just described |
| 15:41 |
eeevil |
but my point is that 99.9% of the time, you're not in that situation where there are 100+ unfillable holds |
| 15:42 |
eeevil |
which, again, is not to say that we shouldn't fix this ... we have several options |
| 15:42 |
eeevil |
but just to say that we're not seeing across-the-board 2.6 fail because most of the time it doesn't matter |
| 15:43 |
eeevil |
transit range is what can cause all 100+ holds to fail for a given (distant) copy. that's a trigger that can expose a given staff user to this behavior |
| 15:44 |
eeevil |
but transit range isn't the cause ... does that make sense? |
| 15:44 |
kmlussier |
eeevil: I think that 99.9% number is highly dependent on the Evergreen site. You're probably right when it comes to the networks I work with, but aren't there a fair number of Evergreen sites that rely more heavily on transit distance rules? |
| 15:46 |
eeevil |
kmlussier: the transit distance restriction is just how the door opens. it doesn't mean that they will suffer from this |
| 15:46 |
eeevil |
they still need tons of /distant/ holds that are at the front of the "queue" (hold sort order) |
| 15:49 |
eeevil |
dbwells: indeed, just so |
| 15:49 |
dbwells |
bshum: eeevil: I am getting better plans using the view in the paste above (as described by eeevil). |
| 15:50 |
eeevil |
bshum: if that view replacement doesn't go far enough, a mat-view based on that is the next step, and that's as fast as mrd can get (faster than since 1.0, when it was a table) |
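For reference, the kind of mat-view eeevil describes might look roughly like the sketch below. The metabib.rec_descriptor_mat name, index, and refresh strategy are illustrative assumptions, not actual Evergreen schema; plain REFRESH works on PostgreSQL 9.3, while REFRESH ... CONCURRENTLY needs 9.4+.

```sql
-- Illustrative only: materialize the rec_descriptor view so joins and
-- hold-permit tests hit a real table with an index on the record column.
CREATE MATERIALIZED VIEW metabib.rec_descriptor_mat AS
  SELECT * FROM metabib.rec_descriptor;

CREATE UNIQUE INDEX rec_descriptor_mat_record_idx
  ON metabib.rec_descriptor_mat (record);

-- Re-run after (re)ingests so the materialized data stays current.
REFRESH MATERIALIZED VIEW metabib.rec_descriptor_mat;
```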
| 15:50 |
dbwells |
I was testing with dkyle's original query, since it was a handy case for testing, so I'll be interested to see if it also helps bshum's holds case. |
| 15:55 |
* bshum |
live tests, because hey, that's just how we roll now |
| 15:55 |
kmlussier |
bshum++ |
| 15:55 |
kmlussier |
bshum: Better you than me. :) |
| 16:04 |
hbrennan |
bshum: It's more exciting that way |
| 16:12 |
hbrennan |
and I didn't break anything! |
| 16:12 |
gsams |
woo! |
| 16:13 |
hbrennan |
Since all checkout limits were previously being regulated by the penalties we removed, I had to create some new limit sets and circ policies for our different groups |
| 16:13 |
bshum |
dbwells: eeevil: Only a preliminary test, but I applied that new paste for the rec_descriptor and asked jventuro to run a test report using a fixed field data element. It ran successfully. |
| 16:13 |
hbrennan |
First time since Equinox set up our policies that anyone has touched them... I didn't even have permission to view them yesterday |
| 16:14 |
bshum |
She's going to test another one (the report that we know broke for sure) while I backed out the original find_hold_matrix_matchpoint function to retest my case. |
| 16:14 |
hbrennan |
so equinox++ too |
| 16:14 |
hbrennan |
since they were amused more than anything by our situation this morning |
| 16:15 |
gsams |
hbrennan: that was more or less where I was with mine. It wasn't the first time I had seen it myself anyway; I have bshum to thank for that one though. |
| 16:20 |
hbrennan |
I struggled with printing a page of the policies, because it was cutting off #15 on the list |
| 16:20 |
hbrennan |
so I had to screenshot it |
| 16:21 |
kmlussier |
But there is filtering on those screens now. Makes them much easier to use. |
| 16:23 |
bshum |
dbwells: eeevil: Yes, the permit hold test is faster now with the new rec_descriptor view in place. Or at least not above 50 ms, so reasonable. |
| 16:23 |
eeevil |
cool ... simple CREATE OR REPLACE VIEW upgrade script, then |
| 16:23 |
eeevil |
dbwells++ |
| 16:23 |
eeevil |
bshum++ |
| 17:12 |
gsams |
which is a bit less than helpful in our case |
| 17:12 |
gsams |
I'm not really sure where this would be going wrong though |
| 17:18 |
|
berick joined #evergreen |
| 17:19 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html <http://testing.evergreen-ils.org/~live/test.html> |
| 18:47 |
|
hbrennan joined #evergreen |
| 19:16 |
|
hbrennan joined #evergreen |
| 21:02 |
|
GtownJoe joined #evergreen |
| 01:17 |
|
bmills joined #evergreen |
| 04:09 |
|
remingtron_ joined #evergreen |
| 05:20 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html <http://testing.evergreen-ils.org/~live/test.html> |
| 05:27 |
bshum |
Bleh, old report template running super long because it seems to rely on metabib.rec_descriptor for getting item type. That's annoying :( |
| 05:27 |
bshum |
View of view of view |
| 05:27 |
bshum |
Bleh |
| 13:05 |
|
hbrennan joined #evergreen |
| 13:11 |
|
kbeswick joined #evergreen |
| 13:21 |
eeevil |
bshum: a view that uses select-clause subqueries against record_attr_flat to simulate rec_descriptor would be an improvement ... we could skip the hstore step in that case |
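A rough sketch of what eeevil describes, assuming record_attr_flat exposes (id, attr, value) and showing only a few example attribute columns; the real rec_descriptor view carries the full set of record attribute definitions:

```sql
-- Hypothetical rewrite: one scalar subquery per attribute column, reading
-- record_attr_flat directly and skipping the hstore expansion entirely.
CREATE OR REPLACE VIEW metabib.rec_descriptor AS
  SELECT b.id AS record,
         (SELECT value FROM metabib.record_attr_flat f
           WHERE f.id = b.id AND f.attr = 'item_type' LIMIT 1) AS item_type,
         (SELECT value FROM metabib.record_attr_flat f
           WHERE f.id = b.id AND f.attr = 'item_form' LIMIT 1) AS item_form,
         (SELECT value FROM metabib.record_attr_flat f
           WHERE f.id = b.id AND f.attr = 'item_lang' LIMIT 1) AS item_lang
    FROM biblio.record_entry b;
```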
| 13:23 |
bshum |
eeevil: Sounds logical. Fwiw, crosstab is definitely super slow so far in my local testing, so that seems a dead end. |
| 13:24 |
bshum |
eeevil: In reading your reply and other places rec_descriptor gets used, I'm suddenly wary of circ/hold matrix |
| 13:24 |
bshum |
With the lookup functions I mean. |
| 13:25 |
eeevil |
nah, it ends up /only/ doing the hstore<->record dance for the one record's data |
| 13:26 |
eeevil |
reports are a problem because a join is not necessarily (or even likely) going to use an index lookup on the record id column from mrd |
| 13:26 |
eeevil |
the circ/hold stuff does |
| 16:59 |
berick |
bshum: yep |
| 16:59 |
bshum |
The client was understandably impatient and gave me a network error while I waited for it. |
| 17:01 |
|
jwoodard joined #evergreen |
| 17:03 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html <http://testing.evergreen-ils.org/~live/test.html> |
| 17:04 |
|
mmorgan left #evergreen |
| 17:05 |
|
mrpeters left #evergreen |
| 17:09 |
gsams |
trying to export MARC records using the marc_export tool in 2.3.5 right now; our newest library seems to not get pulled with --library, it just runs for 60 seconds and quits with nothing |
| 17:11 |
gsams |
is there something that I am missing to get marc_export to see the new library by its shortname? |
| 17:14 |
bshum |
gsams: What's the full command that you're attempting to use? |
| 17:17 |
|
ldwhalen_mobile joined #evergreen |
| 17:18 |
|
ningalls joined #evergreen |
| 17:24 |
bshum |
I think it's bombing out while running through the retarget permit test on all the holds. |
| 17:25 |
bshum |
None of the 122 unfilled holds are for the library checking it in, or item owning. |
| 17:25 |
bshum |
And the rules don't allow the item to be holdable by the other libs. |
| 17:25 |
bshum |
But because there's so many holds, it just kills the time |
| 17:26 |
bshum |
Sigh |
| 17:28 |
|
kmlussier joined #evergreen |
| 17:34 |
jeff |
i blame commit ae9641 |
| 17:34 |
pinesol_green |
[evergreen|Thomas Berezansky] Nearest Hold: Look at 100 instead of 10 holds - <http://git.evergreen-ils.org/?p=Evergreen.git;a=commit;h=ae9641c> |
| 17:34 |
jeff |
who came up with that idea and said it was alright, anyway? ;-) |
| 17:35 |
jeff |
`` Jeff Godin claims they have done this and it has produced no issues for them.'' |
| 17:35 |
gmcharlt |
jeff: you should have a serious chat with that fellow |
| 17:35 |
* jeff |
ducks |
| 17:36 |
jeff |
if checkin modifier "speedy" is set, consider (and test) only ten holds... otherwise, do 100. if checkin modifier "exhaustive" set, check all holds... ;-) |
| 17:36 |
bshum |
Heh |
| 17:37 |
bshum |
I'm wondering if my problems might be PG 9.3 related |
| 17:37 |
jeff |
bshum: in one case where i was looking at things (and i think i was talking out loud here also) i found that the hold permit tests seemed to be running twice at checkin. no idea if that's been that way for a while, or if it's avoidable. |
| 17:37 |
jeff |
you might pull on that thread a little if you're digging into this, though. |
| 17:38 |
bshum |
2542c713ea4a0d686d7b7ceae352929b60a80806 since it mentions PG 9.3 and hold matrix functions |
| 17:38 |
pinesol_green |
[evergreen|Mike Rylander] LP#1277731: Disambiguate parameter/column names - <http://git.evergreen-ils.org/?p=Evergreen.git;a=commit;h=2542c71> |
| 17:38 |
bshum |
But really, I think I'm just running on fumes now :) |
| 17:41 |
dbwells |
gsams: One thing to consider is that --library specifies the owning_lib, not the circ_lib. Not sure if that is a factor in your scenario. |
| 17:42 |
gsams |
dbwells: that is actually what I want, I plan to use marc_export as a temporary measure for Boopsie uploads for this library |
| 17:44 |
dbwells |
gsams: if you search asset.call_number, do you find rows where the owning_lib matches the id for NRH from actor.org_unit? |
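Written out, the check dbwells suggests might look like this (NRH being the shortname from gsams's scenario):

```sql
-- Count call numbers whose owning_lib is the org unit with shortname 'NRH';
-- if this returns zero, --library has nothing to export for that library.
SELECT COUNT(*)
  FROM asset.call_number acn
  JOIN actor.org_unit aou ON (aou.id = acn.owning_lib)
 WHERE aou.shortname = 'NRH'
   AND NOT acn.deleted;
```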
| 17:47 |
bshum |
So |
| 17:47 |
bshum |
A retarget test |
| 17:47 |
bshum |
Time pre-upgrade DB: 13 ms. Time on production: 500+ ms |
| 17:49 |
dbwells |
gsams: Also, if it runs exactly for 60 seconds, that smells like a timeout issue. I'd consider checking what query is running in the DB when the script exits and go from there. |
| 18:06 |
gsams |
dbwells: asset.call_number returns 162k rows for that owning lib |
| 18:15 |
dbwells |
gsams: I'm heading out in just a few minutes, but if you want to explore the timeout angle, you could try passing a second argument to the $recids json_query in marc_export. Something like '{timeout => 600, substream => 1}' (that might not be exactly right, and I can't recall if 'substream' is necessary, but something like that). |
| 01:03 |
|
artunit_ joined #evergreen |
| 03:56 |
|
b_bonner joined #evergreen |
| 03:56 |
|
mtcarlson_away joined #evergreen |
| 05:04 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html <http://testing.evergreen-ils.org/~live/test.html> |
| 06:33 |
|
Callender joined #evergreen |
| 07:23 |
|
artunit joined #evergreen |
| 07:44 |
|
akilsdonk joined #evergreen |
| 15:01 |
bshum |
eeevil: Hmm, the uncontrolled table has nothing in there either |
| 15:02 |
bshum |
metabib.uncontrolled_record_attr_value |
| 15:02 |
bshum |
Assuming that's the one you mean |
| 15:03 |
jeff |
jboyer-isl: i was about to ask you something starting with "in your testing", but then i saw your comment. :-) |
| 15:03 |
bshum |
eeevil: Hmm, might it be that because "multi" is still set to TRUE for that definition, that's a potential issue? |
| 15:05 |
jeff |
jboyer-isl: i like your approach of "don't let these API calls transfer captured holds at all" |
| 15:05 |
jeff |
jboyer-isl: it gets around the issue of inconsistent ahcm when opting not to retarget (but still transferring) the captured holds |
| 15:17 |
* jeff |
eyes OpenILS::Application::Cat::Merge::merge_records as potentially dead code |
| 15:18 |
jeff |
jboyer-isl: what methods have you used to clean up after this -- as you described it -- "havoc"? :-) |
| 15:20 |
jboyer-isl |
Since we don't know about it until after it's happened, usually the staff have already either checked the item out to someone or placed a new hold. We haven't yet gone to great lengths to correct anything. I think it was only reported a couple of times between our figuring out what was going on and my applying that basic patch. |
| 15:24 |
jeff |
neither old-school merge nor in-db merge seem to have paid any attention to title holds (though i didn't go digging too far back to confirm on the old-school method) |
| 15:30 |
jeff |
jboyer-isl: and the staff client appears to be the only thing calling open-ils.circ.hold.change_title[.specific_holds], and i don't see any references to OpenILS::Application::Circ::Holds::change_hold_title[_for_specific_holds] so i think that answering the question of "what's the impact when these holds are left on the old bib but their copies are moved" and "add a join to skip copies with in-transit holds also" followed by some testing should help bring |
| 15:38 |
|
rjackson-isl joined #evergreen |
| 15:38 |
|
jboyer-isl joined #evergreen |
| 15:39 |
jeff |
I can see two scenarios where Transfer Title Holds can be used. In one, there are no planned changes to copies or bibs, we're just moving the hold or holds. In the other, we've transferred (possibly as part of a merge) or are about to transfer copies to another bib. |
| 15:40 |
bshum |
Doh, array_to_string(array_agg()) crept in again over string_agg() |
| 15:40 |
bshum |
Just saw it as part of the function metabib.reingest_record_attributes() |
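The two aggregate idioms bshum mentions are equivalent for simple concatenation, but string_agg (available since PostgreSQL 9.0) is the more direct form:

```sql
-- Older idiom: build an array, then join it into one string.
SELECT array_to_string(array_agg(value), ' ') FROM metabib.full_rec WHERE record = 1;

-- Preferred idiom: aggregate straight to a string.
SELECT string_agg(value, ' ') FROM metabib.full_rec WHERE record = 1;
```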
| 15:42 |
jeff |
In the second, I don't know that we care if the already-captured holds stay behind on the old title record unless a) it will break fulfillment / transit completion -> available -> fulfillment or b) the hold gets out of its captured status for some reason, in which case it'll likely end up being an unfillable hold |
| 16:11 |
|
Callender joined #evergreen |
| 16:12 |
eeevil |
kmlussier / bshum: vandelay can match against random tag data without record attrs being involved (see: isbn), so ISTM there should be a way without involving the existing attr defs |
| 16:13 |
RoganH |
bshum: short email sent to dev list. We'll see where the discussion goes. :) |
| 16:13 |
kmlussier |
eeevil: The record attrs was what was recommended to me by an ESI developer back when we were testing the Vandelay development because the standard Vandelay matchpoints required that a subfield be used. |
| 16:15 |
eeevil |
kmlussier: gotcha. this is a solvable issue, probably very simply |
| 16:17 |
eeevil |
bshum: looking at the code, record_attr_flat /does/ include uncontrolled values |
| 16:17 |
eeevil |
so, I was wrong up there -^ |