Time | Nick | Message
01:36 |
|
mtate joined #evergreen |
04:38 |
|
RBecker joined #evergreen |
04:50 |
|
Mark__T joined #evergreen |
05:19 |
|
dbwells_ joined #evergreen |
07:09 |
|
timf joined #evergreen |
07:29 |
|
jboyer-isl joined #evergreen |
07:58 |
|
collum joined #evergreen |
08:10 |
|
_bott_ joined #evergreen |
08:23 |
|
akilsdonk joined #evergreen |
08:24 |
|
Dyrcona joined #evergreen |
08:27 |
|
rjackson-isl joined #evergreen |
08:28 |
pastebot |
"Dyrcona" at 64.57.241.14 pasted "bshum: Last night's Marque.pm of all bibs with holdings" (6 lines) at http://paste.evergreen-ils.org/23 |
08:29 |
Dyrcona |
So, 2 hours and 7 minutes to dump all of our bibs with holdings. |
08:29 |
Dyrcona |
Four records were too large, and likely not exported. |
08:29 |
csharp |
Dyrcona: so how long did it take before? |
08:31 |
Dyrcona |
With marc_export from support scripts, it took so long that I never let one finish, over 24 hours in every case. |
08:31 |
csharp |
wow |
08:31 |
csharp |
ours takes 48 hours |
08:31 |
* csharp |
would like to test Marque.pm |
08:32 |
bradl |
csharp: speaking of speed, have you guys thought any more about SSD? |
08:32 |
* csharp |
loves that name |
08:32 |
Dyrcona |
Marque.pm is not the final version. It's basically a test bed/proof of concept. |
08:32 |
csharp |
bradl: we have thought about it, but it will probably be next year before we can do anything about it ;-) |
08:32 |
Dyrcona |
The final version will be ready for 2.6. |
08:33 |
csharp |
Dyrcona: well, if you want a tester, just let me know |
08:33 |
|
kbeswick joined #evergreen |
08:33 |
Dyrcona |
I don't recall if we got SSDs for the performance evaluation server or if we stuck with our tried and 146GB SAS drives. |
08:33 |
bradl |
csharp: cool. I just get a little giggly about new hardware and am curious how it works for you (if you go that direction). |
08:33 |
csharp |
we're on 146 GB SAS drives too |
08:33 |
Dyrcona |
s/(and )/$atrue/ |
08:34 |
csharp |
bradl: sure, I'll keep you in the loop |
08:34 |
Dyrcona |
I just got an email from NewEgg this morning offering $50 off a 1TB SSD. |
08:34 |
Dyrcona |
I didn't realize that SSDs were available that large already. |
08:35 |
Dyrcona |
Crazy: I installed an update for libyaz4 yesterday, and today Update Manager is telling me that libyaz5 wants to install. |
08:36 |
paxed |
sounds a bit premature ... :P |
08:37 |
Dyrcona |
Eh, whatever. |
08:37 |
Dyrcona |
I'm going to update my VM today, too, so might as well try testing z39.50 with libyaz5. |
08:38 |
Dyrcona |
I use IndexData's repo for debs by way of explanation. |
08:39 |
|
Shae joined #evergreen |
08:41 |
bshum |
Dyrcona: libyaz5 as a client seemed fine so far in my light testing. |
08:42 |
Dyrcona |
bshum: I want to test it with the server. |
08:42 |
bshum |
I haven't gotten to try it with the server part yet. |
08:42 |
bshum |
Cool. |
08:43 |
bshum |
Also, super exciting for Marque test. |
08:44 |
Dyrcona |
Yeah. I figure if you want to use Marque.pm for Boopsie, then go ahead. |
08:44 |
Dyrcona |
It all has to be installed by hand, but the final version in the branch on lp1223903 will do all the right things. |
08:45 |
Dyrcona |
bug 1223903 |
08:45 |
pinesol_green |
Launchpad bug 1223903 in Evergreen "marc_export with --items is too damned slow and other things" (affected: 2, heat: 10) [Undecided,New] https://launchpad.net/bugs/1223903 |
08:45 |
Dyrcona |
There we go. |
08:45 |
Dyrcona |
I am going to split it up into real modules so it can be reused in custom scripts. |
08:46 |
Dyrcona |
marc_export in support-scripts will become an example of using it. |
08:46 |
bshum |
Dyrcona++ |
08:47 |
bshum |
I suspect this will be useful for our quarterly exports for autographics too (state ILL stuff) |
08:47 |
Dyrcona |
I think I'll try another dump as XML to see if the size limit applies there, too. |
08:47 |
Dyrcona |
EBSCO wants binary MARC, so I guess I'll need to tell them that our four most popular bibs, i.e. the ones with the most copies, don't export in that format. |
08:47 |
|
mrpeters joined #evergreen |
08:50 |
Dyrcona |
Hey! Fun. US Gov't shutdown affects syrup users, and anyone looking up MARC information online: http://www.loc.gov/home2/shutdown-message.html |
08:50 |
Dyrcona |
Syrup apparently looks stuff up on the LOC website. |
08:51 |
Dyrcona |
We don't use Syrup, I just heard about it from Syrup users on a mailing list. |
08:53 |
Dyrcona |
Bet their z39.50 is off line, too, and I bet we get a Launchpad bug or an internal RT ticket about it. |
08:53 |
|
mmorgan joined #evergreen |
08:54 |
Dyrcona |
<politics>Isn't reassuring that Congress makes sure that they get paid, so they can continue to screw the rest of us over?</politics> |
08:55 |
Dyrcona |
should have been an "it" in there somewhere. :) |
08:55 |
Dyrcona |
Anyway, back to Evergreen related thingies.... |
08:56 |
paxed |
MURICA YEAH is pretty much the laughingstock of the world right now. |
08:56 |
Dyrcona |
paxed: only right now? I thought it was ever since we let the clown steal the election in double aught? |
08:57 |
paxed |
well, that's old stuff, this is new today/yesterday... |
08:57 |
Dyrcona |
It's happened before, several times. |
08:57 |
Dyrcona |
Gov't shutdowns are old news. :) |
08:57 |
Dyrcona |
So are political coups for that matter. |
08:58 |
paxed |
sure, last was in '96 or so, but this is news for today/yesterday/etc |
08:59 |
Dyrcona |
I don't get the foreign infatuation with Obama. |
08:59 |
bshum |
Dyrcona: Not sure how much dbs managed to mirror before the shutdown but there's http://stuff.coffeecode.net/www.loc.gov/marc/ |
08:59 |
Dyrcona |
He's worse than Nixon who was worse than Shrub. |
08:59 |
mrpeters |
oh wow, i didnt even think about loc.gov being shutdown |
09:00 |
Dyrcona |
Now, I'm seriously done with politics in this channel for today. |
09:01 |
Dyrcona |
Fortunately, we don't have ops, so no one can swing the ban hammer. :) |
09:01 |
* Dyrcona |
starts another export of records in XML format. |
09:01 |
bshum |
Some of us could. |
09:01 |
bshum |
:P |
09:02 |
Dyrcona |
Oh, really. I didn't think we had ops for various reasons. |
09:02 |
* bshum |
doesn't care enough this morning. |
09:02 |
Dyrcona |
Well, now I know. |
09:02 |
Dyrcona |
I figured if I were worth banning that would have happened ages ago. :) |
09:03 |
Dyrcona |
Think I'll play some Tom Waits.... |
09:03 |
bshum |
I've always been tempted to promote pinesol_green and have some real fun with roulette. ;) |
09:03 |
Dyrcona |
tsbere and I were saying that roulette ought to temporarily kick-ban someone when it hits a "loaded" chamber. |
09:05 |
jeff |
if the bot had channel operator status, it kicks the nick in question from channel. |
09:05 |
Dyrcona |
yeah, with maybe a 1 hour ban. |
09:05 |
jeff |
if it does not, it makes reference to "who put a blank in here?" |
09:05 |
Dyrcona |
Blanks can kill. |
09:05 |
jeff |
i'd just unload the module in question if it were solely up to me. i'm not a fan. |
09:06 |
jeff |
and for not being a fan, i know far too much about the module. :P |
09:06 |
jeff |
there's probably a word for that somewhere. |
09:06 |
Dyrcona |
Whee! the dump is 1.4GB. |
09:06 |
Dyrcona |
That's in binary format. |
09:06 |
bshum |
Dyrcona: That's awesome. |
09:07 |
bshum |
And horrific. |
09:07 |
Dyrcona |
tmux |
09:07 |
Dyrcona |
hah. wrong window |
09:07 |
jeff |
Dyrcona: as of yesterday, loc Z39.50 was still up, but various software/sites were breaking due to DTD and XSLT URLs (think MARCXML, MODS) redirecting to a 200 OK HTML body |
09:08 |
Dyrcona |
I haven't tried z39.50 this morning. Maybe I will out of curiosity. |
09:08 |
Dyrcona |
Hmm. Marque.pm doesn't like it if you misspell format as fromat. ;) |
09:09 |
|
jeff___ joined #evergreen |
09:09 |
jeff |
better. |
09:09 |
Dyrcona |
jeff: Your evil twins is acting up again. :) |
09:09 |
Dyrcona |
s/twins/twin/ |
09:10 |
jeff |
fallout from the netsplit/etc yesterday |
09:10 |
Dyrcona |
me spill chucker woks grate. i needle gamma chicken. |
09:11 |
jeff |
heh. collectionhq is taking a page from tadl's jasperreports. they just now added a feature where on some reports the item barcode values print as barcodes that you can scan. |
09:12 |
Dyrcona |
In the spirit of "there's an app for that:" There's a Perl module for that. |
09:13 |
Dyrcona |
CPAN++ |
09:27 |
|
kmlussier joined #evergreen |
09:37 |
|
yboston joined #evergreen |
09:41 |
|
ericar joined #evergreen |
09:54 |
|
ccsc joined #evergreen |
09:55 |
phasefx |
@later tell smyers_ I'm usually around during the middle of the day, EST |
09:55 |
pinesol_green |
phasefx: The operation succeeded. |
09:58 |
ccsc |
We are experiencing a very strange problem: we can bill a patron up to $4.99 but if we try to bill more then $4.99 we get the error message that follows: method=open-ils.circ.money.billing.create params=["fce5eba2a15a372868989c6245cbd1b2",{"__c":"mb","__p":["4.99",null,"Copies",null,"",null,null,null,"1947","103","1"]}] THROWN: {"payload":[],"debug":"osrfMethodException : *** Call to [open-ils.circ.money.billing.create] failed for |
09:58 |
ccsc |
I thought we had a max setting somewhere - but the max fine settings seem to be fine |
10:00 |
ccsc |
any ideas as to what settings could be causing this problem |
10:06 |
phasefx |
don't have an idea focused around a specific dollar amount, but am curious, can you create multiple bills, or is it the total that seems to matter? |
10:09 |
ccsc |
it is the total that is the issue |
10:09 |
ccsc |
I can add as many items as I want to until the total gets to $4.99 |
10:13 |
jeff |
i wonder if you have a group penalty threshold defined, and it's trying to add the standing penalty PATRON_EXCEEDS_FINES (id 1) but that for whatever reason doesn't exist in config.standing_penalties. |
10:13 |
jeff |
or... something similar. |
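A couple of quick checks along those lines, as a sketch only - the table names here are from memory (config.standing_penalty, permission.grp_penalty_threshold) and worth verifying against the actual schema:

    -- does the fines-exceeded penalty definition exist?
    SELECT id, name, label FROM config.standing_penalty WHERE id = 1;

    -- what fine thresholds are defined for patron groups / org units?
    SELECT * FROM permission.grp_penalty_threshold WHERE penalty = 1;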
10:15 |
|
RoganH joined #evergreen |
10:17 |
ccsc |
ok - let me check that |
10:30 |
jeff |
your logs likely have more details -- even your error message was truncated when you pasted it to irc. |
10:31 |
jeff |
you may have a failed call to the database function actor.calculate_system_penalties, or you might have a specific line in the perl code that is pointed at by an error message in your opensrf logs. |
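If it is the penalty calculation, running it by hand should reproduce the failure at the database level; a sketch, with placeholder ids to substitute for the affected patron and the billing org unit:

    SELECT * FROM actor.calculate_system_penalties(<patron_id>, <org_unit_id>);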
10:34 |
ccsc |
ok - it seems like this is not just a setting - so I think I will have to send this issue to our tech support group. |
10:36 |
phasefx |
anytime you see a nasty error, something should be fixed. the handling of the error if nothing else |
10:37 |
* Dyrcona |
whistles.... |
10:37 |
Dyrcona |
We got a lotta fixin' to do. :) |
10:39 |
ccsc |
thank you all for your input |
10:40 |
Dyrcona |
<in a Ricky Ricardo voice>phasefx, you gota lotta 'splainin' to do. :) |
10:42 |
pastebot |
"Dyrcona" at 64.57.241.14 pasted "Fun with ISBNs!" (33 lines) at http://paste.evergreen-ils.org/24 |
10:49 |
|
rfrasur joined #evergreen |
10:55 |
rfrasur |
government websites that only run on IE? sounds about right |
10:58 |
bshum |
Dyrcona: For fun I ran that and came up with 41100 rows returned. |
10:58 |
senator |
rfrasur: ugh. for 2004 maybe. |
10:58 |
phasefx |
Dyrcona: yeah, we do. But if we're starting over... ;) |
10:59 |
rfrasur |
senator: I'm determined to not be judgmental today...just to see if it's possible. But, I agree with you. |
11:14 |
bshum |
Dyrcona: I wish there was a way of identifying which bibs were the bad ones when attempting to export larger than the MARC spec allows. |
11:16 |
bshum |
I keep misremembering that the record length info isn't actually the bib ID of the offending bib. |
11:16 |
bshum |
MARC-- |
11:18 |
|
rjackson-isl joined #evergreen |
11:19 |
* csharp |
suspects "too many" holdings attached |
11:19 |
bshum |
Usually that's the case, yeah. |
11:20 |
bshum |
I just wish the warnings gave better hints is all |
11:22 |
rfrasur |
beacon++ |
11:27 |
bshum |
Dyrcona: I tried dumping a huge record (with 700+ holdings) using marcxml and it didn't blink an eye at all. |
11:27 |
bshum |
So it's only crummy USMARC that dies. |
11:34 |
tsbere |
bshum: I assume because XML mode doesn't have field-limited max record lengths ;) |
11:34 |
bshum |
tsbere: Pretty much what I figured too |
11:47 |
|
jdouma joined #evergreen |
11:55 |
|
artunit joined #evergreen |
11:57 |
|
smyers_ joined #evergreen |
12:03 |
Dyrcona |
bshum++ # for testing before I get to ti. |
12:03 |
Dyrcona |
it |
12:07 |
Dyrcona |
bshum: We might be able to trap the MARC::File error with an eval and print the bib id. |
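Roughly what that could look like inside the export loop - a sketch only, assuming the loop already has the bib id in $bib_id, a MARC::Record in $record, and an output handle in $out_fh (all hypothetical names):

    my $binary = eval { $record->as_usmarc() };
    if ($@ || !defined($binary) || length($binary) > 99999) {
        # the leader only has 5 digits for record length, so anything
        # over 99999 bytes cannot be valid binary MARC
        warn "bib $bib_id too large for binary MARC, skipping: $@";
        next;
    }
    print $out_fh $binary;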
12:08 |
* Dyrcona |
was away from his desk for the past hour or more, chatting with a coworker about Evergreen issues and our new member library's first day. |
12:09 |
rfrasur |
Dyrcona: stupid question that I should know so apologies. Are you migrating a library? Or were you? How's it going? I know you have new ones...just not sure HOW new. |
12:09 |
Dyrcona |
rfrasur: Yes, we added the Groton Public Library http://www.gpl.org/ this past weekend. yesterday was their first day live on our Evergreen system. |
12:10 |
Dyrcona |
MVLC now has 36 members! Evergreen has a new library! |
12:11 |
* Dyrcona |
realizes that is kind of a big deal. |
12:11 |
kmlussier |
MVLC++ |
12:13 |
csharp |
MVLC++ |
12:15 |
bshum |
Yay! |
12:27 |
smyers_ |
phasefx: phasefx_ Hey are you around to talk about your experience with lp 1161122 moving a xul page to tt2 |
12:27 |
pinesol_green |
Launchpad bug 1161122 in Evergreen "rejiggered staff client patron search, summary, display" (affected: 3, heat: 16) [Wishlist,Triaged] https://launchpad.net/bugs/1161122 |
12:27 |
phasefx |
smyers_: I'll be around in an hour, lunch time now |
12:28 |
smyers_ |
phasefx: ok thanks |
12:29 |
|
dMiller__ joined #evergreen |
12:31 |
smyers_ |
Dyrcona: out of curiosity how large is the catalog that gpl.org is searching? |
12:32 |
rfrasur |
MVLC++ # It IS a big deal. Congrats! |
12:33 |
Dyrcona |
smyers_: MVLC has 957,974 non-deleted bibs. |
12:33 |
smyers_ |
thanks |
12:33 |
smyers_ |
search speed is great |
12:33 |
bshum |
Well, it is scoped too |
12:33 |
jeff |
should groton appear in the "please select your pickup library" list here? http://groton.mvlc.org/eg/opac/massvc |
12:34 |
jeff |
or does Groton not participate in the massvc/ursa system? |
12:34 |
Dyrcona |
They may not be participating since the stat is moving to autographics in December. |
12:35 |
Dyrcona |
state |
12:35 |
* tsbere |
was never given any virtual catalog ids for groton and as such did not add them to the list |
12:35 |
rfrasur |
That's a nice size library |
12:36 |
jeff |
tsbere: they appear to be GROT :-) |
12:36 |
csharp |
GROT++ |
12:36 |
rfrasur |
Do they also have a relatively new director? |
12:36 |
Dyrcona |
rfrasur: Yes. |
12:36 |
tsbere |
jeff: But that might not be correctly configured. They were part of the VC *before* they joined us, after all. <_< |
12:36 |
jeff |
tsbere: *nod* |
12:36 |
rfrasur |
Big undertaking. |
12:37 |
Dyrcona |
Yeah. I spent my summer on it. |
12:37 |
rfrasur |
Dyrcona++ |
13:04 |
|
dMiller_ joined #evergreen |
13:20 |
phasefx |
smyers_: I'm back |
13:21 |
smyers_ |
phasefx: thanks I am in a meeting atm should be done in ~10min |
13:22 |
bshum |
Tag, you're it. |
13:32 |
smyers_ |
phasefx: back |
13:32 |
phasefx |
smyers_: cool; how can I help you? |
13:34 |
smyers_ |
to sum up why I am reaching out to you: I have been asked to speed up the holdings maintenance page, and my initial research is that xulrunner's xml parsing is the reason for the slow speeds |
13:34 |
smyers_ |
so I was wanting to transform the page into a tt2 page |
13:34 |
smyers_ |
you had just done something similar and I wanted to pick your brain on the amount of time it took you to complete, and any pitfalls that you can warn me about |
13:36 |
smyers_ |
phasefx: ^ |
13:36 |
phasefx |
hrmm, no real pitfalls.. there was a tradeoff in functionality |
13:38 |
phasefx |
having all the data client side gives you very customizable views of the data, but at the expense of pulling and parsing that data. Though you could do similar server-side and just refresh the page on each change |
13:38 |
Dyrcona |
staff_at_GPL++ |
13:39 |
phasefx |
not sending all that data over the wire and just sending what gets displayed is obviously faster, so I'm a fan. It's also much easier to do I18N in our TT environment; you don't even really have to think about it |
13:40 |
smyers_ |
phasefx: thanks and how long did it take you start to finish? |
13:40 |
Dyrcona |
Template Toolkit 2 sure is handy, and it beats trying to figure out Perl's arcane format syntax. |
13:40 |
phasefx |
don't recall, but it wasn't a lot of time compared to the original implementation |
13:41 |
Dyrcona |
<<<@<<@***<<$ |
13:41 |
smyers_ |
ok |
13:41 |
smyers_ |
thanks that answers the questions I needed |
13:42 |
phasefx |
smyers_: happy to help |
13:53 |
eeevil |
smyers_: it'd be interesting to see any evidence that xml parsing is the cause of slowness. I'd be surprised, because parsing XUL and parsing HTML are done by the same code in xulrunner, and holdings maint is pretty slim on actual XML |
13:53 |
eeevil |
but I've been surprised before :) |
13:54 |
smyers_ |
eeevil: what leads us to believe that is just the initial investigation; it's not guaranteed to be the root cause, but it certainly smells like it is |
13:54 |
eeevil |
smyers_: and, remember, holdings maint is at least an order of magnitude more complicated than the patron summary display ... so you'll want to factor that, and the fact that phasefx knows the staff client code really well (he wrote much of it), into your time estimates |
13:54 |
phasefx |
it's just raw size of the data coming over the wire for some records, IMO. A non-paged interface, lots of data retrieved but not actually displayed |
13:55 |
phasefx |
and javascript being single-threaded doesn't help |
13:55 |
smyers_ |
eeevil: what we found is that when selecting a book with over 800 holds, we could get the information from the db in 0.8 seconds, but it took 15 seconds to load |
13:55 |
eeevil |
smyers_: do you have timing data for the different stages of page rendering? that's generally how I attack slow interfaces |
13:56 |
csharp |
smyers_: 800 holds or 800 volumes/copies? |
13:56 |
eeevil |
smyers_: but it's not getting xml there, right? it's getting JSON-as-text ... I just want to be sure I'm understanding what you're measuring and identifying |
13:57 |
smyers_ |
eeevil: it's getting xml from the db |
13:58 |
smyers_ |
csharp: 356 volume copies |
13:58 |
smyers_ |
csharp: 356 volume/copies |
13:58 |
phasefx |
holdings maintenance doesn't get XML |
13:58 |
phasefx |
other than the XUL itself |
13:59 |
phasefx |
but that should be cached |
13:59 |
smyers_ |
unapi.bre returns xml |
14:00 |
smyers_ |
i have to run I would be happy to talk about this more in an hour or so |
14:00 |
phasefx |
smyers_: you're making me think we're talking about different things |
14:00 |
phasefx |
holdings maintenance, aka copy_browser*. No unapi there |
14:00 |
eeevil |
smyers_: the unapi stored procs do, but that's not how the holdings maint interface gets its data |
14:10 |
depesz |
eeevil: got 5 minutes? |
14:11 |
eeevil |
aye |
14:11 |
rfrasur |
jeff:If you'd like to talk reports for a few minutes, let me know. I pretty much have the whole afternoon that I can work with. |
14:12 |
eeevil |
depesz: sure thing |
14:12 |
depesz |
eeevil: so - i looked at the evergreen.ranked_volumes function |
14:12 |
depesz |
let me paste some example code |
14:13 |
depesz |
http://depesz.privatepaste.com/82ebcbeadf - that's the original |
14:14 |
depesz |
http://depesz.privatepaste.com/350bbaf122 -> that's the new one |
14:14 |
eeevil |
depesz: I'm looking at master now ... at a real computer today :) |
14:14 |
depesz |
YAY :) |
14:15 |
depesz |
eeevil: anyway, for some particular set of data and query - the original function does its work in ~400ms, and the new one in ~16ms |
14:16 |
depesz |
of course 400ms is not that bad, but - when I checked pg logs for all slow queries, the one that used the most time was a call to the "unapi.bre()" function - not sure if this is part of evergreen. |
14:16 |
eeevil |
it is |
14:16 |
depesz |
when I profiled it - 98% of its runtime was the call to unapi.holdings_xml, which in turn got the bulk of its time from evergreen.ranked_volumes |
14:17 |
depesz |
so - while the immediate speedup is perhaps not all that great, it looks like it should optimize very often (in this system) called query. |
14:17 |
eeevil |
oh, no, this is a great amortized savings! |
14:17 |
jeff |
depesz: purely as a history/naming data point, bre is another way of spelling "biblio.record_entry", and unapi is a reference to Evergreen's implementation related to unAPI, http://unapi.info/ |
14:17 |
depesz |
thanks jeff. |
14:18 |
depesz |
eeevil: the thing is - my change is simply inlining *two* functions in the evergreen.ranked_volumes |
14:18 |
depesz |
actor.org_unit_descendants and evergreen.rank_ou |
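The flavor of the change, very roughly - this is not the actual patch, just a sketch of replacing the actor.org_unit_descendants() call with an equivalent recursive walk over actor.org_unit so the planner can work with it directly:

    -- instead of:  JOIN actor.org_unit_descendants(?) d ON (d.id = acn.owning_lib)
    WITH RECURSIVE ou_tree AS (
        SELECT id, parent_ou FROM actor.org_unit WHERE id = ?  -- the search org unit
        UNION ALL
        SELECT ou.id, ou.parent_ou
          FROM actor.org_unit ou
          JOIN ou_tree t ON (ou.parent_ou = t.id)
    )
    SELECT id FROM ou_tree;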
14:18 |
* phasefx |
likes simple changes |
14:18 |
eeevil |
now, I notice that you didn't get rid of all the instances of actor.org_unit_descendants and related |
14:18 |
depesz |
additionally - i removed "evergreen.rank_ou(aou.id, $2, $6), evergreen.rank_cp_status(acp.status)" from select clause, as it wasn't used. |
14:19 |
depesz |
checking |
14:19 |
eeevil |
there's still one call to that func, and one to actor.org_unit_descendants_distance |
14:19 |
depesz |
yes, but this one apparently didn't show as slow |
14:20 |
depesz |
if you want/need I can provide version of the function with this also inlined |
14:20 |
eeevil |
well, the reason the outer one is slow is the subselect in the param list, I imagine |
14:20 |
eeevil |
defeats the IMMUTABLE flag, I'd bet |
14:21 |
depesz |
it's not immutable |
14:21 |
depesz |
it's "stable" |
14:21 |
depesz |
not 100% safe, still, though. |
14:21 |
eeevil |
well, in practice it's safe |
14:21 |
eeevil |
the OU tree doesn't, as a rule, change |
14:22 |
depesz |
well - you know the db much better than I do :) |
14:22 |
eeevil |
there are external (non-db) things that must be updated when it does |
14:22 |
eeevil |
right |
14:22 |
eeevil |
I understand |
14:22 |
jeff |
skipping ahead, if inlining functions is a performance benefit, are there recommendations for maintenance of those functions? pre-processing the sql files to fill in macros or something, or... I may be skipping too far ahead here, or misunderstanding. |
14:22 |
eeevil |
I want to pass along context :) |
14:22 |
depesz |
anyway - is it possible to pull such change into evergreen? if yes - should I do something else/more, or is me reporting it in here good enough? |
14:22 |
|
adbowling-isl joined #evergreen |
14:23 |
eeevil |
depesz: it (not this one, per se, but in general) might very well be lost if it's only recorded in IRC |
14:23 |
depesz |
jeff: i would say that pre-processing might be a good first step. as a final solution, I would imagine rewriting all those queries, to make sure that it's all optimized. |
14:23 |
eeevil |
the -dev mailing list would be best |
14:24 |
depesz |
eeevil: ok. will subscribe. should I send this function change request there too, or I can skip it for now? |
14:25 |
eeevil |
depesz: question ... is there a particular pattern that is defeating the inlining of SQL stored funcs? I feel pretty certain that some are inlined, but obv not that one |
14:25 |
eeevil |
and please do copy it there :) |
14:26 |
depesz |
eeevil: ok. will mail there soon(ish) |
14:26 |
eeevil |
depesz++ #more eyes! |
14:26 |
depesz |
as for auto-inlining. TBH, I'm not sure. I think that auto-inlining works only for simplest queries. |
14:26 |
eeevil |
(and more tuits! ;) ) |
14:26 |
depesz |
??tuits |
14:26 |
eeevil |
yeah, it may just be the CTE |
14:27 |
depesz |
in org_unit_descendants - yes. but the evergreen.rank_ou - doesn't have cte |
14:27 |
eeevil |
"round tuits" ... units of time to tackle optimization projects and the like. |
14:27 |
depesz |
thanks - english is not my native tongue. |
14:27 |
bradl |
http://en.wiktionary.org/wiki/round_tuit |
14:27 |
eeevil |
bradl: thanks |
14:27 |
bradl |
depesz: I am a native English speaker and I had to look that up the first time eeevil said it :) |
14:28 |
depesz |
:) |
14:28 |
phasefx |
ha, I had no idea the etymology there |
14:28 |
bradl |
man, I feel like a librarian today |
14:29 |
* bradl |
drops mic, walks off stage |
14:29 |
csharp |
bradl++ |
14:29 |
* rfrasur |
loves round_tuits |
14:29 |
* Dyrcona |
has a round tuit attached to his desk via magnet. |
14:29 |
* csharp |
used to have several of the wooden coins |
14:29 |
rfrasur |
@love round_tuits |
14:29 |
pinesol_green |
rfrasur: The operation succeeded. rfrasur loves round_tuits. |
14:29 |
Dyrcona |
unfortunately, there are never enough of them. |
14:29 |
depesz |
eeevil: just to be sure, this list: http://libmail.georgialibraries.org/mailman/listinfo/open-ils-dev ? |
14:30 |
eeevil |
depesz: that's it |
14:31 |
|
kbutler joined #evergreen |
14:34 |
rfrasur |
Jeff, did you see my "reports" comment? |
14:41 |
|
RoganH joined #evergreen |
14:47 |
jsime |
i have a question about a pattern used by the current staff client for calling back to the opensrf layer (my apologies if i'm misinterpreting the log output) |
14:47 |
jsime |
as an example - logging in with the staff client, it looks like this triggers an iteration through many permissions and makes a separate call to check whether the logged in user has each one |
14:47 |
jsime |
something similar appears to be happening with settings, too (lots of CALL lines in the log for client timeout, default item price, materials processing free, courier code, etc.) |
14:48 |
jsime |
is there something that necessitates this pattern versus, say, one call that returns a permissions list for the user, or one call that returns their full settings list/map/struct? |
14:56 |
pastebot |
"Dyrcona" at 64.57.241.14 pasted "use directives from Marque.pm" (18 lines) at http://paste.evergreen-ils.org/25 |
15:00 |
jeff |
rfrasur: got it now. thanks! |
15:01 |
rfrasur |
If you have time. I'm inputting stuff...but can take a break when you're ready. |
15:18 |
rfrasur |
bshum++ #responsive EG site! That's pretty awesome. |
15:27 |
phasefx |
jsime: most of these services/methods don't keep the patron data handy, so they retrieve what they need at the time they're being called. It's very much broken up, so that, for example, open-ils.auth only needs to know if a user has an appropriate login permission, and doing other permission checks at that time wouldn't help or be communicated to anything else |
15:27 |
phasefx |
does that make sense? |
15:28 |
bshum |
rfrasur: Thanks, I tried to pick a good theme that worked well across many devices. |
15:28 |
phasefx |
jsime: there are permission check methods that allow you to query multiple perms at once, however, but they're not widely used |
15:30 |
rfrasur |
bshum: You did a good job. It was a pleasant surprise as I was trying to fit things on my desktop. |
15:34 |
|
zerick joined #evergreen |
15:37 |
jeffdavis |
hm, searching an org lasso via SRU ( /opac/extras/sru/LASSO/holdings?version=1.1&operation=searchRetrieve&query=hemingway&maximumRecords=0 ) returns 0 results, yet searching the 2 lasso'd org units individually returns 36 and 24 results, respectively |
15:42 |
jsime |
phasefx: that makes sense - i guess the red flag it raised for me was the possibility that a similar pattern of "get a list of references and then iterate through that to make subsequent calls for each individual object" might be in use elsewhere |
15:42 |
jsime |
it seemed like that might actually be what was/is happening, for example, with patron search results |
15:42 |
depesz |
eeevil: i mailed, but got it back with subject: "Subject: [OPEN-ILS-DEV] ***SPAM*** Introduction and possible optimization for database function" |
15:43 |
kmlussier |
depesz: The message made it to the list. |
15:43 |
depesz |
no idea why I got the "SPAM" marker |
15:44 |
Dyrcona |
depesz: That is mostly random it seems. |
15:44 |
eeevil |
depesz: the spam filtering is ... odd on those lists :) |
15:44 |
phasefx |
jsime: there very well may be needless redundancy in places, particularly in the staff client |
15:44 |
depesz |
ah, ok. |
15:44 |
eeevil |
depesz: it did come through, though |
15:45 |
Dyrcona |
depesz: BTW, I am Jason Stephenson of MVLC. In case you didn't know. We're hosting the server you are using. |
15:46 |
depesz |
ah, good to know, Dyrcona :) |
15:46 |
depesz |
i knew about kmlussier |
15:49 |
jsime |
phasefx: would that be more an issue of the staff client not making use of methods in opensrf that can return bundled data in one pass, or that opensrf isn't currently exposing methods to do that? |
15:51 |
phasefx |
jsime: in the case of patron search, it's first getting a list of patron id's for the search results, and then it's fetching user data for each id for the result rows visible on the screen (and making permission checks for each of those calls). If we changed it to stream results to the client from one call, we could save some redundancy/overhead that way |
15:51 |
eeevil |
depesz: rank_ou is used in exactly 4 places, all sourced from that same file (990.schema.unapi.sql), so that one in particular is a good candidate for general inlining ... perhaps retaining the logic in a comment, and pointing to that from the inlining sites |
15:53 |
phasefx |
jsime: the reason it's doing it the way it is now is that streaming wasn't supported originally, and the UI would lock up (single-threaded javascript) if you tried to send too much data over the wire in one shot |
15:54 |
* rfrasur |
would like to mention that Anna Goben in EI is flippin' awesome. Good gravy. |
15:54 |
phasefx |
jsime: does that help? I admit I don't grok exactly what you say, but hope I'm communicating a useful gist of things |
15:57 |
jsime |
phasefx: that does help, yes - the confirmation, at the very least, that i wasn't wildly off-base in my log reading and pcaps from the vm i'm running the staff client on |
16:06 |
phasefx |
jsime: lock up or timeout, even |
16:14 |
* Dyrcona |
is an idiot. |
16:14 |
* rfrasur |
doubts that but if it makes you feel better |
16:14 |
* Dyrcona |
has lost a lot of his work from the past week+. |
16:15 |
rfrasur |
You're still not an idiot. |
16:15 |
rfrasur |
:p |
16:15 |
Dyrcona |
Has definitely lost most of his work today. |
16:16 |
rangi |
git clean -df |
16:16 |
rangi |
? |
16:16 |
rangi |
ive done that once before |
16:16 |
Dyrcona |
No. |
16:16 |
Dyrcona |
an alias called sync2disk. |
16:16 |
Dyrcona |
I used it to copy changed files from a usb stick to my laptop's drive. |
16:16 |
Dyrcona |
unfortunately it does rsync with --delete |
16:17 |
Dyrcona |
so, when the usb stick is older than the laptop drive, destructive things happen. |
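For the archives, two ways an rsync alias like that can be made less destructive (the paths here are made up):

    # preview what --delete would remove before letting it loose
    rsync -av --delete --dry-run /media/usb/work/ "$HOME/work/"

    # or keep anything deleted or overwritten recoverable
    rsync -av --delete --backup --backup-dir="$HOME/.rsync-trash" /media/usb/work/ "$HOME/work/"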
16:17 |
Dyrcona |
I then spent an hour ripping out ownCloud blaming some of my file weirdness on that. |
16:18 |
rfrasur |
yikes |
16:18 |
Dyrcona |
I also ran a backup script that has the same --delete in it, so anything done today is definitely gone. |
16:18 |
Dyrcona |
I can recover older things from the backup server that backs up my backups. |
16:18 |
rangi |
ouch |
16:18 |
rfrasur |
redundancy++ |
16:18 |
rangi |
have you seen obnam? |
16:19 |
Dyrcona |
I had an analogous thing happen yesterday when I tripped my own security at home and locked myself out of the LAN. |
16:19 |
* rfrasur |
has decided this week is just about survival...everything else is gravy |
16:20 |
rangi |
http://liw.fi/obnam/ (sorta on topic) |
16:21 |
Dyrcona |
I don't have time for something new right now. |
16:21 |
Dyrcona |
Feeling rushed is what got me into these messes. |
16:21 |
rangi |
in that case http://gtdfh.branchable.com/ :-) |
16:22 |
|
ktomita_ joined #evergreen |
16:22 |
* rfrasur |
goes to, yet again, remind teens why they can't hang out on the steps even if we're closed. |
16:22 |
rangi |
(i used to sit next to lars at work, and am a big fan of his stuff) |
16:24 |
jeffdavis |
Dyrcona: if it makes you feel better, I recently discovered the hard way that my old laptop has a BIOS supervisor password that I don't know... |
16:24 |
bshum |
senator: Should I mark https://bugs.launchpad.net/evergreen/+bug/1153755 as superseded by the new bug you created since you went a bit further with it? |
16:24 |
pinesol_green |
Launchpad bug 1153755 in Evergreen 2.4 "Numberic bib call number search contains a typo" (affected: 1, heat: 6) [Low,Triaged] |
16:25 |
* rfrasur |
instantly regrets telling aforementioned teens she didn't care if they hung out by the entrance near her office. |
16:25 |
* rfrasur |
cares. |
16:25 |
senator |
bshum: oh sure, never saw that |
16:25 |
senator |
my verbiage, if you can parse it, should incidentally explain why you never really saw a difference with Callender's branch |
16:26 |
senator |
although it should have worked too |
16:26 |
|
Dyrcona left #evergreen |
16:26 |
bshum |
senator: Yeah I saw the changes for the actual index in yours. I'm not entirely sure what I was testing back then and all. It seems like forever ago now. |
16:26 |
senator |
no, that's separate |
16:26 |
bshum |
senator: That and bibcn is like the worst search in a consortium that doesn't share any call numbers uniformly :D |
16:27 |
bshum |
So it hasn't been a huge priority on my end of things. |
16:27 |
senator |
sure, sure, not saying it needs to be |
16:27 |
bshum |
I'll fix the bugs, thanks for clarification :) |
16:28 |
senator |
no, thank you |
16:53 |
depesz |
it looks like asset.copy is parent table for serial.unit |
16:53 |
depesz |
this raises two questions: is it always the case (or just my instance), and - are there always only these 2 tables connected with inheritance (as opposed to: sometimes there are more tables inheriting from asset.copy) |
16:54 |
depesz |
^^ eeevil, or anyone actually |
16:54 |
senator |
depesz: serial.unit is the only child table of asset.copy |
16:54 |
depesz |
and it's always a child? i.e. in all instances of evergreen? |
16:55 |
senator |
there are other inheritance relationships among other tables, of course |
16:55 |
senator |
serial.unit got introduced around evergreen 2.0, and was from its beginning always a child of asset.copy, yes |
16:57 |
depesz |
great. and no other children of asset.copy. makes my life a bit easier |
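Both points are easy to verify on any given instance straight from the PostgreSQL catalogs - plain pg_inherits, nothing Evergreen-specific:

    -- list every child table of asset.copy
    SELECT n.nspname AS schema, c.relname AS child
      FROM pg_inherits i
      JOIN pg_class c ON (c.oid = i.inhrelid)
      JOIN pg_namespace n ON (n.oid = c.relnamespace)
     WHERE i.inhparent = 'asset.copy'::regclass;

    -- and to see only rows stored in the parent itself:
    SELECT count(*) FROM ONLY asset.copy;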
16:57 |
bshum |
Related to serial.unit, we never got a decision out of https://bugs.launchpad.net/evergreen/+bug/1152753 |
16:57 |
pinesol_green |
Launchpad bug 1152753 in Evergreen 2.4 "Serial Units won't go into Copy Buckets" (affected: 4, heat: 18) [Medium,Confirmed] |
16:59 |
eeevil |
depesz: for background, a serial.unit is an "extended" asset.copy. it simplifies the code side a ton to be able to look at everything through the asset.copy lens except in the very narrow and special cases where the serial.unit stuff matters. |
16:59 |
depesz |
ok. thanks. |
17:14 |
|
mmorgan left #evergreen |
17:22 |
depesz |
i ahve this query: http://depesz.privatepaste.com/a7a47d8f77 |
17:23 |
depesz |
can anyone of you tell me more about it? as in: is the circ_lib list fixed, or dependent on installation? is the number of items there commonly ~11, or can it vary, and by how much? |
17:24 |
bshum |
depesz: circ_lib depends greatly on the installation. |
17:24 |
depesz |
bshum: ok. what about number of items in the list? |
17:24 |
bshum |
I would expect those to correlate with specific libraries defined in the actor.org_unit table (they should be IDs in that table) |
17:25 |
depesz |
in this table i have 333 rows, but only 11 ids are listed in the shown query ? |
17:25 |
bshum |
I'm not sure what that means :) |
17:25 |
bshum |
Though it looks weird to me that it's got dupe numbers in it. |
17:25 |
depesz |
i wouldn't worry about dupes, these are of no consequence. |
17:27 |
senator |
that's from the new_book_list method in the supercat service. it takes an arbitrary org unit as input and gets a list of its descendants before it builds the query you're seeing |
17:27 |
bshum |
Well, as written, it just looks like a query to find recent bibs with holdings created from a specific set of locations? |
17:27 |
|
dMiller__ joined #evergreen |
17:27 |
|
smyers_ joined #evergreen |
17:28 |
senator |
so there could be any number of those ids from 1 to a couple hundred, depending both on the installation and on the scope at which the user wants to search |
17:28 |
depesz |
ok. would you say that it's going to be usually < 11, usually more than 11, usually more than 50 ? |
17:28 |
depesz |
hmm .. ok. |
17:30 |
|
dkyle joined #evergreen |
17:42 |
|
timhome joined #evergreen |
18:03 |
|
ktomita joined #evergreen |
18:08 |
|
dMiller_ joined #evergreen |
18:11 |
|
dMiller__ joined #evergreen |
18:39 |
|
dMiller joined #evergreen |
18:41 |
|
dMiller_ joined #evergreen |
21:01 |
|
kbeswick joined #evergreen |
21:24 |
|
jdouma joined #evergreen |