Time |
Nick |
Message |
01:17 |
|
bmills joined #evergreen |
04:09 |
|
remingtron_ joined #evergreen |
05:20 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html |
05:27 |
bshum |
Bleh, old report template running super long because it seems to rely on metabib.rec_descriptor for getting item type. That's annoying :( |
05:27 |
bshum |
View of view of view |
05:27 |
bshum |
Bleh |
06:37 |
|
b_bonner joined #evergreen |
06:37 |
|
mtcarlson_away joined #evergreen |
06:40 |
|
Callender joined #evergreen |
07:11 |
|
ktomita_ joined #evergreen |
08:00 |
|
akilsdonk joined #evergreen |
08:02 |
|
Dyrcona joined #evergreen |
08:04 |
|
jboyer-isl joined #evergreen |
08:07 |
|
wjr joined #evergreen |
08:21 |
|
mrpeters joined #evergreen |
08:23 |
|
rjackson-isl joined #evergreen |
08:33 |
|
ericar joined #evergreen |
08:40 |
|
remingtron__ joined #evergreen |
08:41 |
|
ktomita joined #evergreen |
09:21 |
|
_bott_ joined #evergreen |
09:24 |
|
yboston joined #evergreen |
09:37 |
|
kbeswick joined #evergreen |
09:59 |
|
phasefx joined #evergreen |
10:00 |
|
jwoodard joined #evergreen |
10:25 |
* Dyrcona |
wonders if there ought to be a unique index on cbrebi.bucket, cbrebi.target_biblio_record_entry |
10:25 |
* Dyrcona |
sees absolutely no reason that the same target_biblio_record_entry should appear more than once in the same bucket. |
10:26 |
Dyrcona |
It would also help track down whatever is putting multiple entries in patron book bags, since that would blow up when it happens. |
10:28 |
eeevil |
Dyrcona: combined with a use of cbrebi.pos, I don't see a reason that we should forbid multiple entries generally ... perhaps "unique ... where pos is null" |
10:29 |
eeevil |
or unique across entry and pos, if pos defaults to something (don't recall if it does) |
10:29 |
Dyrcona |
eeevil: Seeing as pos is pretty much always NULL unless set specifically by a script that populates the bucket, I see no reason for pos to remain. |
10:30 |
Dyrcona |
eeevil: pos defaults to nothing, i.e. NULL. |
10:30 |
eeevil |
Dyrcona: ordered lists of things ... think: foundation of a "netflix queue" |
10:30 |
Dyrcona |
eeevil: That'd be nice if the client actually filled it in, but nothing does unless you script it yourself outside the client, afaict. |
10:30 |
eeevil |
(that's why pos was added in the long ago ... as infrastructure to encourage that when someone cared enough) |
10:31 |
Dyrcona |
Well, no one cares, apparently. |
10:31 |
eeevil |
you're correct, nothing uses it today |
10:31 |
Dyrcona |
no one cares. |
10:31 |
jeff |
you have a use case in mind for an item to be in a bucket twice, but at different positions? |
10:31 |
eeevil |
meh, no one cares /enough/... people mention it from time to time |
10:32 |
Dyrcona |
ok. I'll grant the "enough." :) |
10:32 |
* phasefx |
always wanted a "container is a set" flag or something |
10:32 |
Dyrcona |
So, if pos stays, it gets added to the unique. |
10:32 |
Dyrcona |
I wouldn't mind that. |
10:33 |
eeevil |
jeff: I can certainly imagine catalogers or collections folks using it. shared containers for, say, book clubs ... I just don't see globally restricting it as something that models reality |
10:34 |
Dyrcona |
I'm tempted to make these changes in production just to get bug reports about adding to book bags blowing up so I can "see" what is going on. |
10:34 |
jeff |
Dyrcona: query for new instances of dupes and go to logs, or have you already gone that route? |
10:35 |
eeevil |
Dyrcona: maybe something like: create unique index ... ( _entry, coalesce(pos,-1)); |
10:35 |
Dyrcona |
eeevil: When the OPAC pulls up book bags, the duplicates are excluded because of what AppUtils->bib_container_items_via_search does. |
10:35 |
Dyrcona |
So, public bookbags won't show dupes. |
10:36 |
Dyrcona |
Err, public buckets when viewed via the OPAC, rather. |
10:36 |
jeff |
eeevil: ah, that gets around my next question of how to constrain |
10:37 |
jeff |
better than something like defaulting pos to -1 |
10:37 |
eeevil |
phasefx: I like that, but it means using a trigger instead of an index for the constraint ... which is both not unprecedented and not that hard |
10:37 |
Dyrcona |
jeff: I hate trying to find crap in the logs. With about 10,000 log entries per second, it isn't much use. |
10:39 |
eeevil |
Dyrcona: how would you feel about containers having to opt in to being a bag instead of a set ... say, by container type, using a trigger to implement what phasefx suggests? |
10:39 |
* eeevil |
can't recall if we have a table of container types or just an enum/constraint |
10:40 |
eeevil |
it's more complicated, obv, but it leaves doors open |
10:41 |
Dyrcona |
eeevil: I would not be opposed though it is more complicated. That said, I think we should go for simple. |
10:41 |
Dyrcona |
Anyway, I'm still tempted to clean up the dupes via script and throw a constraint on to get error reports, so I have some new instances to check in the logs. |
10:42 |
* Dyrcona |
will likely not do that, though. |
10:42 |
eeevil |
Dyrcona: so, start with "create unique index ... ( _entry, coalesce(pos,-1));", and if the rule of "use pos if you want dupes" is too constricting, we re-evaluate? |
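In full index form, eeevil's proposal might look like the sketch below, assuming the stock container.biblio_record_entry_bucket_item columns (bucket, target_biblio_record_entry, pos) Dyrcona named earlier; the COALESCE folds NULL pos values into a sentinel so they take part in the uniqueness check. The index name is made up for illustration.

    -- One entry per (bucket, record) while pos is unset; rows that set
    -- an explicit pos can still repeat at different positions.
    CREATE UNIQUE INDEX cbrebi_bucket_entry_once_idx
        ON container.biblio_record_entry_bucket_item
        (bucket, target_biblio_record_entry, (COALESCE(pos, -1)));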
10:42 |
Dyrcona |
I can go along with that. |
10:43 |
eeevil |
cool... COMPROMISE FTW! ;) |
10:43 |
Dyrcona |
The cleanup script will be a pain if it is in SQL, though. |
10:43 |
eeevil |
I'll help with that if you like |
10:44 |
Dyrcona |
OK. |
10:45 |
Dyrcona |
It will need to determine which bucket item survives, and then "merge" the notes (if any) somehow. |
10:45 |
Dyrcona |
Maybe a couple of WITHs and/or a DO block. |
10:49 |
jeff |
first, see how many dupes have notes. |
10:50 |
jeff |
because if that number is zero or small, you can skip a query to "merge" the notes. |
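A count along the lines jeff suggests could be sketched like this; the note table name (container.biblio_record_entry_bucket_item_note) and its item column follow the stock naming pattern but are assumptions here, so verify them against your schema first.

    -- How many rows in duplicated (bucket, record) pairs carry a note?
    SELECT count(DISTINCT i.id)
      FROM container.biblio_record_entry_bucket_item i
      JOIN container.biblio_record_entry_bucket_item_note n ON n.item = i.id
     WHERE (i.bucket, i.target_biblio_record_entry) IN (
            SELECT bucket, target_biblio_record_entry
              FROM container.biblio_record_entry_bucket_item
             GROUP BY 1, 2
            HAVING count(*) > 1);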
11:02 |
eeevil |
Dyrcona: http://pastie.org/9280371 |
11:04 |
eeevil |
Dyrcona: could use window functions, but the brute force is enough there, I think ... oh, and you'll want to add the container id wherever entry is used in those queries |
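The pastie itself isn't reproduced in the log, but the window-function variant eeevil mentions might look roughly like this sketch, which keeps the lowest-id row per (bucket, record) pair and ignores the note merging entirely:

    -- Delete all but the earliest duplicate in each (bucket, record) pair.
    DELETE FROM container.biblio_record_entry_bucket_item
     WHERE id IN (
           SELECT id
             FROM (SELECT id,
                          row_number() OVER (
                              PARTITION BY bucket, target_biblio_record_entry
                              ORDER BY id) AS rn
                     FROM container.biblio_record_entry_bucket_item) ranked
            WHERE rn > 1);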
11:05 |
Dyrcona |
eeevil: I'll take a look at what you shared. |
11:29 |
bshum |
@hate reports |
11:29 |
pinesol_green |
bshum: But bshum already hates reports! |
11:34 |
|
akilsdonk joined #evergreen |
11:36 |
|
vlewis joined #evergreen |
11:36 |
jboyer-isl |
debugging upstart is almost as fun as debugging javascript. The best way I've found to get it to straighten out its state is to just reboot the machine. :/ |
11:37 |
dbs |
@decide upstart or systemd |
11:37 |
pinesol_green |
dbs: If all else fails, try this: http://en.wikipedia.org/wiki/Rubber_duck_debugging |
11:38 |
Dyrcona |
dbs: upstart |
11:38 |
dbs |
Dyrcona: bzzt, debian went with systemd (wisely!) :) |
11:38 |
Dyrcona |
I mean, if you must choose one of those, take the less busted option. |
11:39 |
* Dyrcona |
detects a note of sarcasm. |
11:39 |
dbs |
No sarcasm. Debian chose systemd and ubuntu is going to follow. |
11:39 |
Dyrcona |
Systemd is the antithesis of the UNIX way. |
11:39 |
Dyrcona |
I know. |
11:40 |
Dyrcona |
systemd is like the "one ring" of init systems. |
11:40 |
Dyrcona |
And about as evil, IMHO. |
11:40 |
dbs |
Sometimes it's better not to be religious about particular ways |
11:41 |
Dyrcona |
Sometimes, it's better to just use what works. Nothing broken in BSD init or SysV init. |
11:41 |
Dyrcona |
dbs: I suggest you google for Linus' rant against the systemd devs. |
11:41 |
dbs |
Oh, I've read his rant. Linus likes to rant. Nothing new there. |
11:42 |
bshum |
I think we're going to need to rewrite metabib.rec_descriptor to try getting at the information more directly. |
11:42 |
bshum |
I'm concerned because a report that was built ages ago, looking for the Item Type of a bib record |
11:42 |
bshum |
Is still running after 10+ hours |
11:43 |
bshum |
And I have a sinking feeling it's because of the changes made to metabib.record_attr |
11:43 |
bshum |
I thought about changing up the template too, but I haven't yet found another way of accessing the type of a record. |
11:43 |
Dyrcona |
dbs: I think "religion" is what got systemd chosen by Debian. |
11:46 |
bshum |
Does anyone else have an opinion on the best way to poke at the problem before I start toying around? |
11:48 |
jboyer-isl |
Dyrcona: I'll heartily agree that there's nothing wrong with BSD's rc system (NetBSD's, at least) but I'd take just about anything over SysV. |
11:48 |
Dyrcona |
bshum: metabib.record_attr is nearly useless since MVF. Use metabib.record_attr_flat instead. |
11:49 |
bshum |
Dyrcona: Yeah, that's what I was thinking; I'll try writing a new view that builds off that. |
11:49 |
Dyrcona |
At least it's a view on a table, and not a view on a view. ;) |
11:49 |
jboyer-isl |
And I do like some things about upstart, but tracking down problems is no fun. |
11:49 |
Dyrcona |
jboyer-isl: If tracking down upstart problems is no fun, try systemd. ;) |
11:51 |
* dbs |
disengages |
11:53 |
bshum |
Hmm, maybe crosstab() |
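For reference, a tablefunc crosstab() pivot over metabib.record_attr_flat might be sketched as below (the item_type/item_form attribute names are just examples); as bshum reports further down, this approach benchmarked far too slowly to be useful.

    -- Requires: CREATE EXTENSION tablefunc;
    -- Pivot per-record attributes into columns (slow in practice, per below).
    SELECT * FROM crosstab(
        $$SELECT id, attr, value FROM metabib.record_attr_flat
           WHERE attr IN ('item_type', 'item_form') ORDER BY 1, 2$$,
        $$VALUES ('item_form'), ('item_type')$$
    ) AS t(record bigint, item_form text, item_type text);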
11:56 |
|
ericar joined #evergreen |
12:15 |
bshum |
Oh, haha |
12:16 |
bshum |
I guess we've crossed this ground before: http://markmail.org/message/dzo7fsvtwz2bq2nl |
12:16 |
bshum |
With dbwells and others |
12:18 |
|
akilsdonk joined #evergreen |
12:19 |
|
mmorgan joined #evergreen |
12:19 |
|
DPearl joined #evergreen |
12:50 |
|
ldw joined #evergreen |
12:58 |
|
bmills joined #evergreen |
13:05 |
|
hbrennan joined #evergreen |
13:11 |
|
kbeswick joined #evergreen |
13:21 |
eeevil |
bshum: a view that uses select-clause subqueries against record_attr_flat to simulate rec_descriptor would be an improvement ... we could skip the hstore step in that case |
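The shape eeevil describes might look something like this sketch; the view name and the pair of attributes shown are illustrative, and metabib.record_attr_flat is assumed to expose id/attr/value columns as in stock 2.6.

    -- Simulate metabib.rec_descriptor with select-clause subqueries,
    -- skipping the hstore expansion entirely.
    CREATE OR REPLACE VIEW metabib.rec_descriptor_flat AS
    SELECT bre.id AS record,
           (SELECT value FROM metabib.record_attr_flat
             WHERE id = bre.id AND attr = 'item_type'
             LIMIT 1) AS item_type,  -- LIMIT 1 guards against multi-valued attrs
           (SELECT value FROM metabib.record_attr_flat
             WHERE id = bre.id AND attr = 'item_form'
             LIMIT 1) AS item_form
      FROM biblio.record_entry bre;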
13:23 |
bshum |
eeevil: Sounds logical. FWIW, crosstab is definitely super slow so far in my local testing, so that seems a dead end. |
13:24 |
bshum |
eeevil: In reading your reply and other places rec_descriptor gets used, I'm suddenly wary of circ/hold matrix |
13:24 |
bshum |
With the lookup functions I mean. |
13:25 |
eeevil |
nah, it ends up /only/ doing the hstore<->record dance for the one record's data |
13:26 |
eeevil |
reports are a problem because a join is not necessarily (or even likely) going to use an index lookup on the record id column from mrd |
13:26 |
eeevil |
the circ/hold stuff does |
13:27 |
bshum |
Hmm, gotcha. |
13:49 |
|
ericar joined #evergreen |
14:22 |
|
dluch joined #evergreen |
14:31 |
|
kmlussier joined #evergreen |
14:35 |
|
akilsdonk joined #evergreen |
15:29 |
Dyrcona |
jeff: Just to follow up from earlier and yesterday, I did a quick check and found that the duplicate cbrebi entries with notes only have notes on 1 entry. |
15:30 |
jeff |
interesting. |
15:31 |
jeff |
i think that makes sense, since you can only add notes after you've added the item to the list, right? |
15:31 |
jeff |
so, the list would be de-duplicated before giving you the opportunity to add a note. |
15:32 |
Dyrcona |
Right, but I don't think deduplication is explicit in the code. I think it is a side effect more than anything. |
15:32 |
* jeff |
nods |
15:32 |
jeff |
similar with the HTML view of lists. |
15:32 |
jeff |
(because it's a container search) |
15:33 |
Dyrcona |
Well, that's the same interface and code if you're talking about opac/myopac/list. |
15:34 |
Dyrcona |
But, yeah, bib_container_items_via_search dedupes because of the sort code. |
15:35 |
Dyrcona |
It uses the search results to sort, so the duplicates disappear at that point. |
15:35 |
jeff |
i was talking about /eg/opac/results?bookbag= more than /eg/opac/myopac/lists?bbid=20647 |
15:36 |
Dyrcona |
Ok. It's still the same retrieval code, IIRC. |
15:36 |
jeff |
think so |
15:36 |
jeff |
different display :-) |
15:36 |
Dyrcona |
yeah |
15:36 |
Dyrcona |
So, I was only half wrong. :) |
15:36 |
|
kbeswick joined #evergreen |
16:02 |
hbrennan |
I just have to share my praises for the miracle of Evergreen's stable OPAC links... I just find it so wonderful that I can execute a complicated search, grab the link, and share the exact same set of OPAC results with someone else.... just amazing |
16:06 |
gmcharlt |
:) |
16:36 |
pastebot |
"Dyrcona" at 64.57.241.14 pasted "Query to delete duplicate container.biblio_record_entry_bucket_item" (59 lines) at http://paste.evergreen-ils.org/64 |
16:37 |
Dyrcona |
jeff & eeevil ^^ |
16:37 |
|
tater-laptop joined #evergreen |
16:38 |
jeff |
ooh. sirsidynix ncip message in the wild! |
16:40 |
|
dbs joined #evergreen |
16:40 |
|
pmurray joined #evergreen |
16:40 |
|
pmurray joined #evergreen |
16:40 |
|
csharp joined #evergreen |
16:40 |
|
jl- joined #evergreen |
16:40 |
|
dkyle joined #evergreen |
16:40 |
eeevil |
Dyrcona: that looks sane to me |
16:41 |
Dyrcona |
eeevil: thanks! I'll swap the rollback with a commit tomorrow and let it rip. |
16:41 |
Dyrcona |
in development of course! |
16:48 |
jeff |
hbrennan: all i can say to that is: Invalid search type selection |
16:48 |
jeff |
hbrennan: and perhaps: |
16:48 |
jeff |
Undetermined response error |
16:49 |
hbrennan |
That's pretty much what happened anytime I tried to save a Sirsi url for more than 2 minutes |
16:53 |
rangi |
that happens with most proprietary ILS |
16:54 |
|
Callender joined #evergreen |
16:54 |
hbrennan |
On a related note, is there a way to grab the url for a search within the staff client? I often forget that I'm in it, then I can't grab a link |
16:55 |
berick |
hbrennan: you can grab the url.. |
16:55 |
berick |
hbrennan: see the grey Debug option top-right, next to Print Page |
16:55 |
hbrennan |
yep |
16:55 |
berick |
click Modify URL |
16:55 |
hbrennan |
Ahh! |
16:55 |
berick |
well.. |
16:55 |
|
artunit joined #evergreen |
16:56 |
berick |
you have to change the oils://remote/ |
16:56 |
hbrennan |
Ahh, yes |
16:56 |
berick |
part and replace it with http://your-domain/ |
16:56 |
hbrennan |
Still better than recreating the search in a browser after I've already done it in the staff client |
16:56 |
berick |
indeed |
16:57 |
hbrennan |
Will the browser-based "staff client" urls be the same way, or with our domain? |
16:57 |
berick |
the browser client will be all https:// |
16:57 |
hbrennan |
yessss |
16:57 |
hbrennan |
Wonderful |
16:57 |
hbrennan |
Thanks berick++ |
16:58 |
berick |
no prob |
16:58 |
hbrennan |
So nice to ask the person who's building that! |
16:59 |
bshum |
So |
16:59 |
bshum |
open-ils.circ: [INFO:20079:Transport.pm:163:140222333191811] Message processing duration: 65.027 |
16:59 |
bshum |
I think that's saying it took 65 seconds to check in an item. |
16:59 |
berick |
hbrennan: heh, i'm here all week! (not really, i'm off Friday) |
16:59 |
hbrennan |
:) |
16:59 |
berick |
bshum: yep |
16:59 |
bshum |
The client was understandably impatient and gave me a network error while I waited for it. |
17:01 |
|
jwoodard joined #evergreen |
17:03 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html |
17:04 |
|
mmorgan left #evergreen |
17:05 |
|
mrpeters left #evergreen |
17:09 |
gsams |
trying to export marc records using the marc_export tool in 2.3.5 right now; our newest library doesn't seem to get pulled with --library, it just runs for 60 seconds and quits with nothing |
17:11 |
gsams |
is there something that I am missing to get marc_export to see the new library by its shortname? |
17:14 |
bshum |
gsams: What's the full command that you're attempting to use? |
17:17 |
|
ldwhalen_mobile joined #evergreen |
17:18 |
|
ningalls joined #evergreen |
17:24 |
bshum |
I think it's bombing out while running through the retarget permit test on all the holds. |
17:25 |
bshum |
None of the 122 unfilled holds are for the library checking it in, or the item's owning library. |
17:25 |
bshum |
And the rules don't allow the item to be holdable by the other libs. |
17:25 |
bshum |
But because there's so many holds, it just kills the time |
17:26 |
bshum |
Sigh |
17:28 |
|
kmlussier joined #evergreen |
17:34 |
jeff |
i blame commit ae9641 |
17:34 |
pinesol_green |
[evergreen|Thomas Berezansky] Nearest Hold: Look at 100 instead of 10 holds - <http://git.evergreen-ils.org/?p=Evergreen.git;a=commit;h=ae9641c> |
17:34 |
jeff |
who came up with that idea and said it was alright, anyway? ;-) |
17:35 |
jeff |
`` Jeff Godin claims they have done this and it has produced no issues for them.'' |
17:35 |
gmcharlt |
jeff: you should have a serious chat with that fellow |
17:35 |
* jeff |
ducks |
17:36 |
jeff |
if checkin modifier "speedy" is set, consider (and test) only ten holds... otherwise, do 100. if checkin modifier "exhaustive" set, check all holds... ;-) |
17:36 |
bshum |
Heh |
17:37 |
bshum |
I'm wondering if my problems might be PG 9.3 related |
17:37 |
jeff |
bshum: in one case where i was looking at things (and i think i was talking out loud here also) i found that the hold permit tests seemed to be running twice at checkin. no idea if that's been that way for a while, or if it's avoidable. |
17:37 |
jeff |
you might pull on that thread a little if you're digging into this, though. |
17:38 |
bshum |
2542c713ea4a0d686d7b7ceae352929b60a80806 since it mentions PG 9.3 and hold matrix functions |
17:38 |
pinesol_green |
[evergreen|Mike Rylander] LP#1277731: Disambiguate parameter/column names - <http://git.evergreen-ils.org/?p=Evergreen.git;a=commit;h=2542c71> |
17:38 |
bshum |
But really, I think I'm just running on fumes now :) |
17:38 |
bshum |
jeff: I do wonder about that sometimes. |
17:38 |
gsams |
bshum: sorry had to step away. I'm running marc_export --library NRH > NRHbibs.mrc |
17:39 |
gsams |
when I do it with my own library's shortname it works, I assume as expected |
17:39 |
gsams |
it got 45k+ records with that. |
17:41 |
dbwells |
gsams: One thing to consider is that --library specifies the owning_lib, not the circ_lib. Not sure if that is a factor in your scenario. |
17:42 |
gsams |
dbwells: that is actually what I want, I plan to use marc_export as a temporary measure for Boopsie uploads for this library |
17:44 |
dbwells |
gsams: if you search asset.call_number, do you find rows where the owning_lib matches the id for NRH from actor.org_unit? |
17:47 |
bshum |
So |
17:47 |
bshum |
A retarget test |
17:47 |
bshum |
Time pre-upgrade DB: 13 ms. Time on production: 500+ ms |
17:49 |
dbwells |
gsams: Also, if it runs exactly for 60 seconds, that smells like a timeout issue. I'd consider checking what query is running in the DB when the script exits and go from there. |
18:06 |
gsams |
dbwells: asset.call_number returns 162k rows for that owning lib |
18:15 |
dbwells |
gsams: I'm heading out in just a few minutes, but if you want to explore the timeout angle, you could try passing a second argument to the $recids json_query in marc_export. Something like '{timeout => 600, substream => 1}' (that might not be exactly right, and I can't recall if 'substream' is necessary, but something like that). |
18:16 |
dbwells |
e.g. my $recids = $editor->json_query({...big query hash...}, {timeout => 600, substream => 1}); |
18:17 |
dbwells |
That should give it 10 minutes, if I am remembering right. |
18:17 |
gsams |
dbwells: it doesn't appear to have either of those in the file already, as far as I can tell anyway |
18:17 |
gsams |
I suppose I should ask, should it? |
18:18 |
gsams |
but I can definitely add that in after the query |
18:18 |
dbwells |
Right, the second argument hash is totally optional, and almost never there. |
18:19 |
gsams |
dbwells: alright, thanks for that. I will give that a whirl and see what happens! |
18:19 |
dbwells |
It seems surprising to me that that query could time out, but quitting after exactly 60 seconds makes me suspicious. |
18:20 |
dbwells |
Good luck! |
18:20 |
gsams |
dbwells: thank you! |
18:25 |
gsams |
dbwells++ |
18:26 |
gsams |
that was it |
18:26 |
dbwells |
:) |
19:21 |
|
edoceo joined #evergreen |
19:25 |
bshum |
So |
19:25 |
bshum |
I decided to monkey with action.find_hold_matrix_matchpoint |
19:26 |
bshum |
And ripped out all the rec_descriptor stuff |
19:26 |
bshum |
Since we don't use any of those in our current hold matrix rules |
19:26 |
bshum |
Doing so took an item that took 65+ seconds to check in down to only 11 seconds |
19:26 |
bshum |
Still very long, but.... it hurt less. |
19:26 |
bshum |
So it definitely is having a horrible impact with metabib.rec_descriptor. |
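One way to see that impact directly is to time the matchpoint lookup on its own, along these lines; the argument list for action.find_hold_matrix_matchpoint varies by version and the IDs here are placeholders, so check the function signature in your own schema first.

    -- Time a single matchpoint lookup (placeholder IDs throughout).
    EXPLAIN ANALYZE
    SELECT * FROM action.find_hold_matrix_matchpoint(
        2,      -- pickup org unit
        2,      -- request org unit
        1234,   -- target copy
        5678,   -- hold user
        5678    -- requestor
    );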
19:27 |
bshum |
I've been thinking up more ways of rewriting it, but figured I'd try some shotgun approaches to see what happens |
19:52 |
gsams |
bshum: I like the sound of that for some reason |
19:52 |
gsams |
even though I'm not entirely sure what you are talking about |
19:52 |
bshum |
gsams: Oh trust me, I'm not sure anymore what I'm supposed to think at this point. |
19:53 |
bshum |
I've just been toying with the parts of our DB so much now trying to get every inch of ground I can. |
19:53 |
gsams |
If I had more understanding of what I was dealing with, I would probably be doing the same thing |
19:54 |
gsams |
I already do more than I should |
19:54 |
gsams |
with my level of understanding, I feel like I could break something any second |
19:54 |
hbrennan |
gsams: That's the fun level of permission! |
19:55 |
hbrennan |
high access, medium understanding :) |
19:55 |
gsams |
hbrennan: HA! Yeah that hit the nail on the head |
19:55 |
gsams |
I like to think of myself as a jack of all trades with this |
19:56 |
bshum |
Well I know what I did isn't a "solution" |
19:56 |
gsams |
right, but if you don't indeed use it, it is a "temporary fix"? |
19:56 |
bshum |
But it proves to me that the performance hit is significant with the 2.6 changes in the metabib views. |
19:56 |
gsams |
that too |
19:57 |
gsams |
I do have to say, I am thankful for how explanatory the commits are in git sometimes |
19:58 |
bshum |
Well, sure I could keep it in my database, but then I'll always be looking over my shoulder to make sure we don't bash it or build something worse on top of it :) |
19:58 |
bshum |
I'd rather "fix it" for everybody and avoid local customizations if I can. |
19:58 |
hbrennan |
Yay for bshum++ for sharing |
20:00 |
gsams |
indeed, I certainly prefer the sharing community approach. Hopefully one day I'll be able to commit some time to help with that myself as well |
20:00 |
hbrennan |
I'm trying to figure out why EG is telling third parties that a patron is blocked when they have the max checked out.... everything EG-side works for patrons |
20:00 |
hbrennan |
Is that in Standing Penalties (where I can't look because it's grayed out)? |
20:00 |
gsams |
That sounds like Standing Penalties to me |
20:00 |
hbrennan |
in other words, only SIP authentication things are blocked |
20:00 |
bshum |
Err, potentially. |
20:01 |
gsams |
what bshum said though |
20:01 |
gsams |
it could be that |
20:01 |
bshum |
I've never quite learned how SIP messages decide whether to say a patron is blocked or not. |
20:01 |
hbrennan |
The alert in EG is "Patron exceeds max checked out item threshold" |
20:01 |
hbrennan |
even though they only have 25 of the 25 allowed |
20:02 |
bshum |
Each penalty has different blocks associated with it in Evergreen |
20:02 |
bshum |
It's possible that maybe the penalty block on circ is treated in SIP as "don't let them do anything" |
20:02 |
gsams |
what third party are you talking about? |
20:02 |
hbrennan |
Overdrive, our public computer management |
20:03 |
hbrennan |
if you have the max allowed checkouts, you're blocked |
20:03 |
gsams |
the public computer management usually has the ability to either enforce or ignore blocks for certain things |
20:03 |
gsams |
at least I remember CASSIE having that ability |
20:03 |
hbrennan |
We "fixed" this by increasing the max allowed by 1, but that just makes a bigger mess |
20:03 |
hbrennan |
hey, we have CASSIE now |
20:04 |
gsams |
Let me check my server and see what setting might fix that issue |
20:04 |
hbrennan |
I don't think Overdrive is that smart though.. it's just giving me a general "blocked" |
20:04 |
bshum |
hbrennan: Out of curiosity, when someone has the max items out, do you want them blocked in Evergreen from different functions? Like checkout/holds/renews |
20:04 |
hbrennan |
Nope |
20:04 |
hbrennan |
and we've changed that |
20:04 |
bshum |
Or do you really only mean to use it to stop circ staff from giving out more stuff. |
20:05 |
hbrennan |
so patrons with max out can still renew and place holds |
20:05 |
hbrennan |
exactly |
20:05 |
bshum |
Well |
20:05 |
gsams |
it should just have like CIRC and FULFILL blocks on it when they hit that right? |
20:05 |
bshum |
Maybe this is a situation where not using the checkout penalties and going with the limit sets might be appropriate. |
20:05 |
hbrennan |
That's where we have it, in limit sets |
20:05 |
hbrennan |
or do we... |
20:05 |
gsams |
oh |
20:06 |
hbrennan |
shoot, I was just looking at way too many of those categories |
20:06 |
bshum |
Well, if you have it as a limit set, there wouldn't be any block message on the patron when you retrieve them. |
20:06 |
bshum |
The limit sets only show up when you move to check out beyond whatever it's limiting. |
20:06 |
bshum |
But even if you've implemented limit sets |
20:06 |
bshum |
Then you still have to remove the group penalty threshold |
20:06 |
hbrennan |
Okay, yes we want that pop-up as you try to check out the 26th thing |
20:07 |
hbrennan |
Oh, okay |
20:07 |
bshum |
And then remove the penalties that have already been put into place. |
20:07 |
bshum |
Otherwise, both will apply |
20:07 |
hbrennan |
And that will still regulate the limit, but we can choose to keep going with checkouts if overriding the message |
20:07 |
bshum |
And you really don't seem to need a real "block" with the penalty. |
20:07 |
hbrennan |
which we occasionally want to do |
20:07 |
bshum |
I believe that is true, with the limit sets. |
20:07 |
gsams |
yeah, that would work |
20:08 |
* bshum |
is going to wander off and find some food now, but hopes hbrennan has gotten some leads on how to get into more trouble ;) |
20:08 |
hbrennan |
awesome.. if only I had those permissions to view.... let me see if someone near me does |
20:08 |
gsams |
and it should stop both situations without having to mess with anything else |
20:08 |
hbrennan |
:) |
20:08 |
hbrennan |
I love that |
20:08 |
hbrennan |
brb |
20:09 |
gsams |
bshum++ #getting all the medium knowledge folks in trouble |
20:09 |
hbrennan |
haha |
20:09 |
gsams |
*medium understanding* meant to put it as hbrennan did |
20:14 |
hbrennan |
well, no one here who can touch it wants to (wimps!) :) |
20:14 |
hbrennan |
so I'll wait until the boss is back |
20:15 |
hbrennan |
I'll save my ++ing until it works, but thanks bshum and gsams for the push in the right direction |
20:24 |
|
kmlussier joined #evergreen |
20:59 |
hbrennan |
Not so good news.... there is no limit set to say "you can have 25 things total".. limit set rules are for individual circ mods |
21:00 |
hbrennan |
I still think there's a way, but it's not as easy |
21:04 |
bshum |
hbrennan: Out of curiosity, how many circ mods does your system have? |
21:04 |
hbrennan |
umm.... perhaps 20? |
21:04 |
hbrennan |
maybe less |
21:04 |
bshum |
Limit sets can include many circ mods, not just individual ones. |
21:04 |
hbrennan |
We tried adding up all the "others" |
21:04 |
hbrennan |
but it's letting me check out more than the max total |
21:04 |
hbrennan |
I noticed we didn't have a limit set rule for books, for example |
21:05 |
bshum |
So in theory, you could make a limit set of 25 and include all circ mods. |
21:05 |
hbrennan |
everything, not just the ones that aren't already listed? |
21:05 |
hbrennan |
Because we have a limit set for 5 videos, so we didn't add that to the new limit set |
21:06 |
bshum |
Each limit set works on its own terms. |
21:06 |
hbrennan |
My thought was that EG would get confused if we said "You can check out up to 5 videos" and "you can check out up to 25 videos," etc., etc. |
21:06 |
bshum |
No, they'll each apply on their own. |
21:06 |
bshum |
So, a limit set of 5 for videos would stop at five videos |
21:06 |
hbrennan |
so instead of just including the half list of circ mods that didn't already have a limit set, we want a limit set with ALL the circ mods |
21:07 |
bshum |
A limit set of 25 for books and videos, etc., would stop at 25 across all those circ mods |
21:07 |
bshum |
So yes |
21:07 |
hbrennan |
not letting more than 5 videos, but continuing with books |
21:07 |
hbrennan |
ah ha... let me try that |
21:08 |
bshum |
Just add all the circ mods to the 25 limit set and it ought to be equivalent to a global block on items |
21:08 |
bshum |
Err |
21:08 |
bshum |
Assuming every item has a circ mod I guess |
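On the database side, the setup bshum describes might be sketched roughly like this; the config.circ_limit_set and config.circ_limit_set_circ_mod_map table names are stock, but the exact column lists here are from memory and should be checked, and the set still has to be linked into the circ policies, which is where things go wrong below.

    -- A 25-item limit set covering every circ modifier (verify columns first).
    WITH ls AS (
        INSERT INTO config.circ_limit_set
            (name, owning_lib, items_out, depth, global, description)
        VALUES
            ('Total items out', 1, 25, 0, TRUE, 'Cap total checkouts at 25')
        RETURNING id
    )
    INSERT INTO config.circ_limit_set_circ_mod_map (limit_set, circ_mod)
    SELECT ls.id, cm.code
      FROM ls, config.circ_modifier cm;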
21:13 |
hbrennan |
hmmm.. it's letting me keep going after 25 |
21:13 |
hbrennan |
on checkouts |
21:14 |
bshum |
Now, it might be a matter of making sure the limit set is linked to all the circ policies. |
21:14 |
bshum |
Or set to fall through and associated with a common rule |
21:14 |
hbrennan |
I'm reading something like that in the docs right now |
21:15 |
hbrennan |
Linked Limit Set |
21:23 |
hbrennan |
Well... added the new limit set to the circ policy for Patrons (so not all users) |
21:24 |
hbrennan |
and now when trying to scan the 26th item I do get a pop-up |
21:24 |
hbrennan |
and it says Exceptions: no_matchpoint |
21:24 |
hbrennan |
getting closer |
21:25 |
hbrennan |
and to override I would need the permission open-ils.circ.checkout.full.override |
21:30 |
hbrennan |
uh-uh |
21:30 |
hbrennan |
no one can check out |
21:30 |
hbrennan |
Just broke the library |
21:32 |
bshum |
Eeps! |
21:34 |
hbrennan |
hehe |
21:34 |
hbrennan |
We undid the circ policy linking with the limit set we created |
21:34 |
hbrennan |
we deleted the limit set we created for the total checkouts |
21:35 |
hbrennan |
still getting no_matchpoint for certain users |
21:35 |
bshum |
That sounds like a rule has changed. |
21:36 |
hbrennan |
yep... |
21:36 |
bshum |
And whatever was the default has had something added to it and it's no longer matching |
21:36 |
|
dbwells_ joined #evergreen |
21:37 |
hbrennan |
true |
21:51 |
hbrennan |
dang, well.... bshum++ and gsams++ anyway |
21:52 |
hbrennan |
I need to go help with the backed up checkout stations now :) |
21:52 |
hbrennan |
000081049 |
21:52 |
hbrennan |
woops |
21:52 |
bshum |
hbrennan: hope things calm down |
21:52 |
bshum |
And that all will be well later. |
21:53 |
hbrennan |
eh, we're closing in 10 minutes |
21:53 |
hbrennan |
so yeah :) |
21:53 |
hbrennan |
until tomorrow |
21:53 |
hbrennan |
I'm coming in early to call Equinox and confess |
21:57 |
bshum |
Good luck then. |
21:58 |
bshum |
Also hbrennan++ #bravely trying things. |
21:58 |
hbrennan |
haha thanks |
21:58 |
hbrennan |
wishlist: massive UNDO button |
21:59 |
dbs |
hbrennan: in theory that is point-in-time recovery (PITR) of your database :) |
21:59 |
hbrennan |
dbs: Would that affect checkouts? |
21:59 |
dbs |
hbrennan: yep |
22:00 |
hbrennan |
eh |
22:00 |
dbs |
So probably not what you want. |
22:00 |
hbrennan |
Nope |
22:00 |
dbs |
But it _is_ a massive UNDO button! |
22:00 |
hbrennan |
I want to undo everything that staff poked around in in the past hour |
22:00 |
hbrennan |
haha |
22:04 |
|
DPearl1 joined #evergreen |
22:18 |
hbrennan |
Is it Friday yet? |
22:18 |
hbrennan |
:) |
22:18 |
bshum |
It's getting there. |
22:18 |
hbrennan |
Well, guess I'll get out of here.. even though my brain won't be leaving work.... but can't fix things until tomorrow |
22:19 |
hbrennan |
goodnight all! |
22:20 |
gsams |
goodnight! |
22:20 |
bshum |
Night |
22:20 |
gsams |
I was stuck up front, sorry about the woops |
22:20 |
gsams |
and just missed her haha |
22:31 |
dbs |
more for bshum: http://blog.wikimedia.org/2014/06/10/on-our-way-to-phabricator/ |
22:31 |
bshum |
dbs: Cool! |
22:34 |
|
gmcharlt joined #evergreen |
22:35 |
dbs |
bshum: that's how I first heard about it, actually (via wikimedia foundation) |
23:07 |
jeff |
oh, max items out ausp blocking SIP operations. yup, been there. |
23:12 |
jeff |
stock, any standing penalty with a CIRC block will return "charge privileges denied" as Y and will return a screen message of "blocked" |
23:13 |
jeff |
if memory serves and last i checked, etc, etc. |
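Which penalties carry that CIRC block is easy to check, assuming the stock config.standing_penalty table with its pipe-delimited block_list column:

    -- Penalties that will trip SIP's "charge privileges denied" flag.
    SELECT id, name, block_list
      FROM config.standing_penalty
     WHERE block_list LIKE '%CIRC%';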
23:20 |
bshum |
Yeah, that's what I suspected too. jeff++ |