01:15 |
|
zerick joined #evergreen |
04:09 |
|
gsams joined #evergreen |
04:16 |
|
berickm joined #evergreen |
04:45 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html |
07:11 |
|
collum joined #evergreen |
07:15 |
|
kbeswick joined #evergreen |
07:48 |
|
rjackson-isl joined #evergreen |
09:30 |
kmlussier |
eeevil: Following up on my question from yesterday, I have a record with 2 Located URIs: one for MBI and one for MCD. I set my preferred search library to MCD (not the system). I search the consortium and see both links as expected on https://jasondev.mvlcstaff.org/eg/opac/record/1554276, but MBI is still appearing first. |
09:30 |
kmlussier |
In this case, it should be showing the preferred library, MCD, right? |
09:30 |
kmlussier |
That is, it should be showing the preferred library first. |
09:33 |
eeevil |
kmlussier: that's what I would expect from the code in the repo, and what noble saw in testing ... let me see if we still have the test server up with that (though ALA has removed most of my human resources for things like knowing where an old test server might be ...;) ) |
09:33 |
|
ldw joined #evergreen |
09:34 |
kmlussier |
eeevil: OK, thanks. I could probably take a look on noble's server too. |
09:42 |
jeff |
in some ways, asset.copy rows are like tree rings. |
11:18 |
Bmagic |
As far as I can tell, our database has never had this function. Oddly enough, it didn't seem to matter; all of the Vandelay stuff worked. Looking back at the SQL upgrade scripts, the last time it was introduced was 0738, around version 2.2.3 |
11:51 |
csharp |
Bmagic: right - I found that too - it didn't make it into any upgrade scripts (on the paths I've taken, anyway) |
11:51 |
Bmagic |
csharp: You don't have it in your production database either? |
11:51 |
csharp |
correct |
11:52 |
csharp |
but... we don't really use Vandelay at this point - we found it when testing acq record import |
11:53 |
Bmagic |
csharp: weird, the situation here is: we upgraded to Postgres 9.2 using pg_dump evergreen instead of pg_dumpall. Vandelay was working just fine before the upgrade; now Postgres is complaining about the two-argument function not existing. Odd, but if I use pg_dumpall, it works fine... puzzle, anyone? |
11:53 |
csharp |
hmmm |
12:01 |
jboyer-home |
Bmagic: what are the flags you’re using for dump and dumpall? |
12:02 |
Bmagic |
pg_dump evergreen > db1_pgdump.sql |
12:03 |
Bmagic |
and that is what is in production now... Later, after finding that Vandelay wasn't working, I used pg_dumpall -o > pgdumpall.sql |
12:04 |
Bmagic |
After testing the restore on a dev box from the dumpall, Vandelay is working (even without the two-argument function) |
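A quick way to chase a discrepancy like that is to list the function signatures that actually made it into each restore and diff the output between the two databases. A minimal sketch, using only the PostgreSQL system catalogs (nothing Evergreen-specific is assumed beyond the vandelay schema name):

    SELECT p.proname,
           pg_get_function_identity_arguments(p.oid) AS args
      FROM pg_proc p
      JOIN pg_namespace n ON n.oid = p.pronamespace
     WHERE n.nspname = 'vandelay'
     ORDER BY 1, 2;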
12:08 |
jboyer-home |
They were getting errors whether they tried to use match sets or not? |
12:10 |
hopkinsju |
jboyer-home: Yes, but the funny part is, if you don't specify an import queue the import *does* work. It goes into a queue that gets labeled "-" |
12:15 |
jboyer-home |
Does this return anything on either system? select * from vandelay.queue where match_set is not null; |
15:03 |
jcamins |
jeff: I seem to recall the BOFH pioneering that feature. |
15:19 |
jeff |
assuming there's an account or two found, it makes it far less important to worry about getting the proper spelling of their name. |
15:34 |
|
akilsdonk_ joined #evergreen |
15:35 |
pinesol_green |
[evergreen|Kathy Lussier] Documentation for Located URI Visibility - <http://git.evergreen-ils.org/?p=Evergreen.git;a=commit;h=d5eb3a3> |
15:44 |
|
eeevil joined #evergreen |
15:44 |
|
Callender joined #evergreen |
16:04 |
|
tspindler left #evergreen |
16:57 |
|
tsbere_ joined #evergreen |
16:58 |
|
dreuther joined #evergreen |
17:10 |
|
mmorgan left #evergreen |
17:18 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html |
17:23 |
|
akilsdonk joined #evergreen |
17:25 |
|
jcamins_ joined #evergreen |
17:31 |
|
shadowsp1r joined #evergreen |
02:29 |
|
jeff_ joined #evergreen |
05:10 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html |
06:02 |
|
gmcharlt joined #evergreen |
06:54 |
|
Callender joined #evergreen |
07:36 |
|
collum joined #evergreen |
09:20 |
|
yboston joined #evergreen |
09:48 |
kmlussier |
Good morning all! |
09:51 |
jeff |
morning! |
09:53 |
kmlussier |
I'm curious about berick's code at bug 1308239. We had tinkered with the idea of using precats for ILLs from our statewide system, but in my testing, I found that Evergreen won't allow me to check out an existing precat. |
09:53 |
pinesol_green |
Launchpad bug 1308239 in Evergreen "Support targeting and fulfillment of precat copy holds (for ILL)" (affected: 1, heat: 6) [Wishlist,New] https://launchpad.net/bugs/1308239 |
09:53 |
kmlussier |
I'm wondering if that code would fix that issue. |
09:54 |
jeff |
I'm pretty sure we've had libraries that used precats for ILL, but I'm not certain of their workflow. They may not have been capturing and using the system for notification -- they might just have been calling the patron then checking out the item as a pre-cat (using the originating library's barcode, which is one of the reasons they moved away from that). |
09:56 |
|
krvmga joined #evergreen |
09:56 |
jeff |
I have interest in that feature / bugfix. I'm not grabbing the bug now, but will try to review when I can. |
09:56 |
kmlussier |
Our preference is to use precats, because the alternative is to use brief bib records that are automatically deleted when the item is returned. But then it fills up the database with all of these deleted bib records. |
09:56 |
jeff |
kmlussier: That behavior is surprising, but I haven't tested it. |
09:57 |
krvmga |
we just upgraded to 2.5 and i'm getting complaints about the labeling of icons in search returns. i've been looking around but i can't see where to fix/alter the labels. anyone know off hand? |
09:57 |
jeff |
Yeah, we do that now with NCIP. It's not ideal compared with precats. |
09:57 |
jboyer-home |
I don’t know if ILL handling is consistent across Evergreen Indiana, but I know at my previous library they used the OCLC ILL number as the barcode, so there wasn’t a problem with re-using barcodes like that; then, to keep things tidy, the pre-cats are deleted later. |
13:53 |
jeff |
krvmga_: both the ten and the thirteen are there. |
13:54 |
krvmga_ |
jeff: since we started talking about it, one of our catalogers overlaid the record. :) |
13:54 |
krvmga_ |
she just told me. |
13:55 |
jeff |
the perils of testing theories on live systems. :-) |
14:00 |
|
DPearl1 joined #evergreen |
14:00 |
|
tspindler joined #evergreen |
14:01 |
|
krvmga_ joined #evergreen |
14:04 |
|
kbeswick_ joined #evergreen |
14:07 |
jeff |
Business::ISBN does the right thing and returns undef when you call as_isbn10 on a 979 isbn13 :-) |
14:13 |
jeff |
OpenILS::WWW::AddedContent::Amazon might fail on the 979 isbn13s, but probably not in a way that impacts anything else. |
14:19 |
|
bmills joined #evergreen |
14:23 |
|
tspindler left #evergreen |
14:23 |
|
tspindler joined #evergreen |
16:21 |
rangi |
the whole district has a pop of 30k .. it's quite an amazing story really |
16:25 |
kmlussier |
I love seeing libraries that do so much to engage with their communities. Almost makes me wish I were working in a library again. Almost. |
16:32 |
|
tspindler left #evergreen |
16:53 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html |
17:04 |
phasefx2 |
bbl |
17:18 |
jeffdavis |
When upgrading a standalone db server from 2.4->2.6, it looks to me like there are no new dependencies that need to be installed. Can anyone confirm that? |
17:20 |
bshum |
jeffdavis: I haven't found anything specific yet. Though I think I did find that one of the upgrade scripts required an extra deb package (compared to before) for me to run it on a fresh standalone DB. |
01:14 |
|
remingtron_ joined #evergreen |
01:14 |
|
dbwells_ joined #evergreen |
05:12 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html |
06:41 |
|
berickm joined #evergreen |
06:42 |
|
Callender joined #evergreen |
07:47 |
|
rjackson-isl joined #evergreen |
15:15 |
jboyer-home |
Bmagic: How many concurrent bibs is your parallel reingest running vs. the number of cores in your db? Depending on how hard it’s working, you may be pushing all of the necessary data out of cache and the db can’t keep up with searching and reingesting without the request timing out. We’ve had that problem during past upgrades. |
15:15 |
jboyer-home |
If you can eventually get results by performing the same search repeatedly that’s likely what’s going on. |
15:16 |
Bmagic |
jboyer-home: If we take the reingest out of the equation, the OPAC still does not return results, which is what prompted me to do the ingest. I was thinking that perhaps I needed to reingest in order for the search results to start working. Is that right? |
15:18 |
jboyer-home |
Depends on how much of an upgrade you did and how much the search has changed. It’s possible that pre-reingest things can’t work, or it could also be that a cold start on your db takes a while to get running. (I think I have a 2.6 DB sitting around that hasn’t been reingested, but I don’t have any opensrf services pointing at it yet to test) |
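For reference, parallel reingests of this sort are commonly run by slicing biblio.record_entry into id ranges and running one slice per session, with the number of sessions sized against the database server's core count. A minimal sketch of one slice, assuming the stock ingest triggers are in place (the id range is a placeholder):

    -- no-op UPDATE; the row-level ingest triggers do the actual reingest work
    UPDATE biblio.record_entry
       SET id = id
     WHERE NOT deleted
       AND id BETWEEN 1 AND 250000;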
15:24 |
|
gsams joined #evergreen |
15:26 |
|
vlewis joined #evergreen |
15:32 |
Bmagic |
jboyer-home: my current theory is that the 9.1 -> 9.2 upgrade is the culprit - I am setting up another VM and this time I will leave it with 9.1 and upgrade to 2.6. Guess-and-check method |
15:50 |
bshum |
Hmm |
15:59 |
Bmagic |
bshum: I eliminated the old PG after the upgrade process |
16:00 |
Bmagic |
bshum: that could be part of the issue as well, because there are a ton of dependencies when removing 9.1... and perhaps some needed by 9.2, so that is a little shaky too |
16:00 |
jboyer-home |
2 Minor data points: I finally got a 2.6 instance built and pointed at my 2.6 db, which I’m certain hasn’t been reingested. It timed out 2-3 times (I assume no one has test db servers as nice as their prod machines) |
16:01 |
jboyer-home |
2nd data point: I’m going to squee like a schoolgirl about Ansible at the next Eg Intl conference. (Hint: a couple of hours ago there was no machine to even run 2.6 on) |
16:02 |
jboyer-home |
Re: searches: but it did eventually return results. (Had a concurrency error in my thought processes there) |
16:02 |
Bmagic |
jboyer-home: Are you saying that after the 3rd try, you got results? and yes, ansible is awesome |
16:02 |
jboyer-home |
3rd or 4th, yeah. |
16:03 |
Bmagic |
jboyer-home: See, I never got results, even after 15 tries... and waiting for the DB to stop processing... 20 minutes goes by, search again and nothing |
16:05 |
Bmagic |
bshum: I am now... some interesting things going on in there, but they finish after a while... then I run the same search... no results |
16:06 |
Bmagic |
bshum: I do see my query getting to the DB, so I know for sure that my search is getting passed to my DB |
16:19 |
|
dconnor joined #evergreen |
16:22 |
Bmagic |
jboyer-home: bshum: tsbere: I am almost done upgrading this db to 2.6 from 2.4.1 - and then the search test again..... drumroll |
16:24 |
jboyer-home |
I am sick with jealousy. It takes close to overnight to go from 2.5.2 to 2.6.1 for us. (on an old antique server, but still) |
16:30 |
|
artunit joined #evergreen |
16:34 |
Bmagic |
jboyer-home: what is the result of du -sh /var/lib/postgresql/9.x/main for you? Ours is 61GB |
16:36 |
tsbere |
We are around 161GB as of Wednesday, dunno if it has gone up/down since then |
16:36 |
Bmagic |
1GB nics just dont cut it |
16:37 |
tsbere |
Our dump files, however, are only in the 8GB range. |
16:39 |
jboyer-home |
Our dumps take roughly 2-3 hours and are only 20-21GB (I don’t dump the auditor schema, we have 3 copies of the db on live hardware, the dumps are just for end of the world OH NOES and testing) |
16:39 |
jboyer-home |
It’s the test restoring that’s really rough. Servers with 128GB of RAM take multiple days. :( |
16:43 |
Bmagic |
jboyer-home: I feel some of that pain over here. We are in a memory drought ourselves |
16:44 |
jboyer-home |
The real deals have 256GB, so it’s not as bad as it could be. |
16:45 |
Bmagic |
awwwwww, 30 minutes into 2.5.3 - 2.6.0 and this "Can't locate XML/LibXSLT.pm in" gotta install that and START AT THE BEGINNING |
16:45 |
jboyer-home |
Ouch. |
16:46 |
Bmagic |
But if that happened after 2 days, I would be really upset |
16:48 |
|
vlewis_ joined #evergreen |
16:48 |
jboyer-home |
I was curious and just checked; apparently I started a restore on our migration testing server last week. I know this because the COPY for asset.copy is still running with an xact_start of 6/13. D: |
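For anyone wanting to spot runaway transactions like that, a simple check against pg_stat_activity (column names as of PostgreSQL 9.2+; older releases use procpid and current_query instead):

    SELECT pid, xact_start, now() - xact_start AS xact_age, state, query
      FROM pg_stat_activity
     WHERE xact_start IS NOT NULL
     ORDER BY xact_start
     LIMIT 10;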
16:51 |
|
ldw_ joined #evergreen |
16:54 |
jboyer-home |
Good luck, Bmagic! |
16:55 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html |
18:10 |
|
dbwells joined #evergreen |
18:28 |
|
mtcarlson_away joined #evergreen |
18:28 |
|
b_bonner joined #evergreen |
05:00 |
|
RAIDoperator joined #evergreen |
05:02 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html |
06:44 |
|
Callender joined #evergreen |
07:52 |
|
geoffsams joined #evergreen |
07:56 |
|
jboyer-home joined #evergreen |
12:43 |
jeff |
rsoulliere: have you tried with a small batch of records as input? |
12:44 |
Dyrcona |
yeah, I had tried to make that go away when it comes from standard input but it didn't work out. |
12:44 |
Dyrcona |
That message should have probably just been removed. |
12:46 |
rsoulliere |
Jeff: let me try a few tests on my end and I will report back. |
12:48 |
jeff |
fwiw, i don't think it will cause any issue, but i don't think marc_export requires that you pass your opensrf config file location if it's in the default location, and I don't think it knows about or respects a -c argument, just --config |
12:48 |
jeff |
i also don't think the previous version paid attention to a -c either, but i could be wrong there. |
12:48 |
jeff |
it's likely not causing any issues, just isn't doing anything. |
13:04 |
jcamins |
RoganH: yeah, I've done that. Not with passwords, because I never hit enter after typing in a password. |
13:05 |
RoganH |
I want one of those eye trackers that change your active window based on where you're looking. Tragically my library probably won't consider laziness an adequate disability to justify the accommodation. |
13:05 |
jcamins |
At least, I try not to. |
13:05 |
rsoulliere |
marc export testing update: I ran the marc_export script on a small list of ids and it worked as expected. Looking over the logs... one correlation is that if my text file had no records, the script exported all records. |
13:05 |
jcamins |
dbs: that's exactly why I don't hit enter after entering a password. |
13:06 |
dbs |
jcamins: thanks |
13:07 |
jcamins |
dbs: no judgement intended. |
13:26 |
Dyrcona |
--all isn't a default. |
13:26 |
Dyrcona |
If you give it no arguments or standard input, it sits there doing nothing. |
13:27 |
Dyrcona |
That's why it prints "waiting for input." |
13:27 |
dbs |
13:11 < rsoulliere> marc export testing update: I ran the marc_export script on a small list of ids and it worked as expected. Looking over the logs... one correlation is that if my text file had no records, the script exported all records. |
13:32 |
Dyrcona |
--all still isn't a default. It's because standard input was empty. |
13:32 |
Dyrcona |
Different bug. |
13:33 |
jeff |
i'm not sure that empty stdin would still export all -- but that's based on a quick read of the code earlier, so i'll defer to rsoulliere's empirical test results :-) |
13:33 |
Dyrcona |
The export code doesn't check Config::need_ids, so if the idlist is empty it looks like it will export everything. |
13:34 |
jeff |
aha |
13:35 |
hbrennan |
What does the Global checkbox mean/do in Circulation Limit Sets? |
13:43 |
dbs |
but I should have been explicit about that |
13:45 |
|
RoganH joined #evergreen |
13:54 |
|
RoganH joined #evergreen |
14:03 |
Dyrcona |
Well, it never got tested with an empty file or pipe, so same difference. |
14:49 |
|
sseng_ joined #evergreen |
14:49 |
|
ktomita joined #evergreen |
15:49 |
|
jeff joined #evergreen |
14:14 |
bshum |
gmcharlt: Springing this on you, but do you have any thoughts about 2.4 work? |
14:14 |
gmcharlt |
bshum: aiming for a release after ALA |
14:14 |
bshum |
#info gmcharlt aiming for an OpenSRF 2.4 release after ALA |
14:15 |
gmcharlt |
main thing I'd like at this point is more testing of the websocket work by berick |
14:15 |
bshum |
#info Get more testing of the websocket work started by berick, see https://bugs.launchpad.net/opensrf/+bug/1268619 and others |
14:15 |
pinesol_green |
Launchpad bug 1268619 in OpenSRF "WebSockets Gateway and JS Library" (affected: 1, heat: 6) [Undecided,New] |
14:16 |
bshum |
Sounds like a good thing. |
14:17 |
bshum |
Okay, we'll follow up on that after ALA, but in the meantime, folks should check out the bug and other announcements to help start testing the upcoming work for OpenSRF. |
14:18 |
bshum |
Thanks for the update gmcharlt++ |
14:18 |
bshum |
#topic Evergreen maintenance releases |
14:19 |
dbwells |
#info 2.5.5 and 2.6.1 are out |
14:19 |
bshum |
The next date on the calendar for that is 6/18, do we want to stick to that or perhaps shift things slightly? |
14:20 |
bshum |
dbwells++ # doing the release dance |
14:31 |
bshum |
I guess it depends on how comfortable you are separating the parts of the sprint into separate bugs |
14:31 |
bshum |
We could track the whole sprint as a blueprint and link it to each bug separately. |
14:31 |
bshum |
I guess it's more about how the final code will be presented. |
14:32 |
berick |
well, for the initial round of testing, i'm considering maintaining maybe a simple list somewhere. there will be /lots/ of little things. opening LP's for each would be cumbersome, imo. |
14:32 |
dbs |
#info dbs, Laurentian University |
14:32 |
berick |
after that's settled down, though, then we can leverage LP in the usual fashion |
14:33 |
bshum |
Sounds reasonable to me. |
14:35 |
dbs |
tpac-style |
14:35 |
bshum |
Next release meaning 2.7? |
14:35 |
RoganH |
I'm in favor of that. |
14:36 |
berick |
it will mean we really need to get in the websockets testing |
14:36 |
dbwells |
+1 to including as a preview |
14:36 |
bshum |
+1 to preview |
14:36 |
berick |
bshum: right, next meaning 2.7 |
15:01 |
kmlussier |
My concern is that this bug ultimately has big impacts on end users, and they aren't really seeing the nuances of the different approaches. |
15:01 |
RoganH |
kmlussier: I'm willing to read through it all. I agree it's important. I'm willing to do so and post about it but with the caveat that I might need correction if I miss a nuance. |
15:02 |
RoganH |
(Actually that seems likely that I will.) |
15:02 |
kmlussier |
For end users (not the devs), it might also be useful to have side-by-side screenshots of what happens. But, although I have done numerous screenshots in my last round of testing, I don't have much for what the original approach did. |
15:03 |
|
shadowspar joined #evergreen |
15:03 |
|
berick joined #evergreen |
15:03 |
|
JLDAGH joined #evergreen |
15:19 |
kmlussier |
I've been tied up in meetings all week and have just been catching up on this discussion. The rec_descriptor issue that bshum raised, where is that causing performance problems? I know reports was one of the areas bshum was poking at, but I'm confused as how it's related to holds. |
15:19 |
eeevil |
that's really just for reports, though. step 2 is use attr_flat instead |
15:20 |
jeff |
kmlussier: it can also slow down the hold permit checks at check when opportunistic hold capture takes place |
15:20 |
eeevil |
kmlussier: for cases where we have to test 100 holds and find that none will work for this copy, it's adding too much time to checkin. for normal cases where we only need to test a couple, it's not noticeable |
15:20 |
jeff |
kmlussier: given an item on a bib with many holds, but the item is not eligible for any of the holds, it can lead to timeouts at checkin |
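To get a sense of the scale involved, something along these lines counts the still-unfilled title holds a checkin may have to test against for a given bib (stock schema assumed; <bib_id> is a placeholder, and copy/volume/metarecord holds would need similar checks):

    SELECT COUNT(*)
      FROM action.hold_request
     WHERE hold_type = 'T'
       AND target = <bib_id>
       AND capture_time IS NULL
       AND cancel_time IS NULL
       AND fulfillment_time IS NULL;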
15:21 |
bshum |
Well, it's noticeable all the time if you care about milliseconds, like our new library (using a SIP sorter for checkins) does. |
15:21 |
kmlussier |
So this is when the retarget checkin modifier is being used? |
15:30 |
eeevil |
so, age protection and transit distance restriction are likely drivers. bshum, do you use either of those? |
15:31 |
eeevil |
kmlussier: not ... exactly |
15:31 |
eeevil |
kmlussier: but, parts of "restrictive rule" will contribute to potentially exposing the issue |
15:31 |
bshum |
eeevil: No to age protection, and as for transit distance, sort of, but I have to do some more investigation on what that's actually doing if anything. |
15:32 |
bshum |
eeevil: I think we have some rules set with transit distance true and a value of 2 or something in the matrix |
15:32 |
bshum |
For stuff that was intended to be local pickup only or something. |
15:32 |
bshum |
Not sure if those were written correctly, actually, now that I'm looking at it again. |
15:38 |
bshum |
eeevil: This isn't exactly apples to apples, but I did a hold permit test on our old production DB hardware (pre-mvf upgrade scripts, etc.) vs. live, and it was something like 12-18 ms vs. 400-800 ms |
15:38 |
bshum |
When I ripped out the rec_descriptor bits in live, that went down to like 24 ms or so |
15:38 |
bshum |
So I'm not sure transit range hurt me as much as that did |
15:39 |
eeevil |
bshum: well, it's not that |
15:39 |
eeevil |
I'm not saying that the problem isn't mrd |
15:40 |
eeevil |
what I'm saying is that it only really matters when you have a very long list of holds to test for |
15:40 |
eeevil |
and none of them pass |
15:40 |
eeevil |
if the first one passes, we look no further and capture |
15:40 |
eeevil |
but if we have to roll through all 100, then the difference matters |
15:41 |
eeevil |
so, if we create a mat-view for mrd, this will likely not be an issue |
15:41 |
eeevil |
in the case I just described |
15:41 |
eeevil |
but my point is that 99.9% of the time, you're not in that situation where there are 100+ unfillable holds |
15:42 |
eeevil |
which, again, is not to say that we shouldn't fix this ... we have several options |
15:42 |
eeevil |
but just to say that we're not seeing across-the-board 2.6 fail because most of the time it doesn't matter |
15:43 |
eeevil |
transit range is what can cause all 100+ holds to fail for a given (distant) copy. that's a trigger that can expose a given staff user to this behavior |
15:44 |
eeevil |
but transit range isn't the cause ... does that make sense? |
15:44 |
kmlussier |
eeevil: I think that 99.9% number is highly dependent on the Evergreen site. You're probably right when it comes to the networks I work with, but aren't there a fair number of Evergreen sites that rely more heavily on transit distance rules? |
15:46 |
eeevil |
kmlussier: the transit distance restriction is just how the door opens. it doesn't mean that they will suffer from this |
15:46 |
eeevil |
they still need tons of /distant/ holds that are at the front of the "queue" (hold sort order) |
15:49 |
eeevil |
dbwells: indeed, just so |
15:49 |
dbwells |
bshum: eeevil: I am getting better plans using the view in the paste above (as described by eeevil). |
15:50 |
eeevil |
bshum: if that view replacement doesn't go far enough, a mat-view based on that is the next step, and that's as fast as mrd can get (faster than since 1.0, when it was a table) |
15:50 |
dbwells |
I was testing with dkyle's original query, since it was a handy case for testing, so I'll be interested to see if it also helps bshum's holds case. |
15:55 |
* bshum |
live tests, because hey, that's just how we roll now |
15:55 |
kmlussier |
bshum++ |
15:55 |
kmlussier |
bshum: Better you than me. :) |
16:04 |
hbrennan |
bshum: It's more exciting that way |
16:12 |
hbrennan |
and I didn't break anything! |
16:12 |
gsams |
woo! |
16:13 |
hbrennan |
Since all checkout limits were previously being regulated by the penalties we removed, I had to create some new limit sets and circ policies for our different groups |
16:13 |
bshum |
dbwells: eeevil: Only a preliminary test, but I applied that new paste for the rec_descriptor and asked jventuro to run a test report using a fixed field data element. It ran successfully. |
16:13 |
hbrennan |
First time since Equinox set up our policies that anyone has touched them... I didn't even have permission to view them yesterday |
16:14 |
bshum |
She's going to test another one (the report that we know broke for sure) while I did back out the original find_hold_matrix_matchpoint function to retest my case. |
16:14 |
hbrennan |
so equinox++ too |
16:14 |
hbrennan |
since they were more amused than anything by our situation this morning |
16:15 |
gsams |
hbrennan: that was more or less where I was with mine. It wasn't the first time I had seen it myself anyway; I have bshum to thank for that one though. |
16:20 |
hbrennan |
I struggled with printing a page of the policies, because it was cutting off #15 on the list |
16:20 |
hbrennan |
so I had to screenshot it |
16:21 |
kmlussier |
But there is filtering on those screens now. Makes them much easier to use. |
16:23 |
bshum |
dbwells: eeevil: Yes, the permit hold test is faster now with the new rec_descriptor view in place. Or at least not above 50 ms, so reasonable. |
16:23 |
eeevil |
cool ... simple CREATE OR REPLACE VIEW upgrade script, then |
16:23 |
eeevil |
dbwells++ |
16:23 |
eeevil |
bshum++ |
17:12 |
gsams |
which is a bit less than helpful in our case |
17:12 |
gsams |
I'm not really sure where this would be going wrong though |
17:18 |
|
berick joined #evergreen |
17:19 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html |
18:47 |
|
hbrennan joined #evergreen |
19:16 |
|
hbrennan joined #evergreen |
21:02 |
|
GtownJoe joined #evergreen |
01:17 |
|
bmills joined #evergreen |
04:09 |
|
remingtron_ joined #evergreen |
05:20 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html |
05:27 |
bshum |
Bleh, old report template running super long because it seems to rely on metabib.rec_descriptor for getting item type. That's annoying :( |
05:27 |
bshum |
View of view of view |
05:27 |
bshum |
Bleh |
13:05 |
|
hbrennan joined #evergreen |
13:11 |
|
kbeswick joined #evergreen |
13:21 |
eeevil |
bshum: a view that uses select-clause subqueries against record_attr_flat to simulate rec_descriptor would be an improvement ... we could skip the hstore step in that case |
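To make the shape of that concrete, here is a rough illustration of the idea, not the patch that was tested. The record_attr_flat column names (id, attr, value) and the attribute codes are assumed from the 2.6-era schema, the view name is a hypothetical stand-in, only two of rec_descriptor's columns are shown, and multi-valued attributes would need more care:

    -- hypothetical stand-in name; the real change would rebuild metabib.rec_descriptor itself
    CREATE VIEW metabib.rec_descriptor_sketch AS
    SELECT bre.id AS record,
           (SELECT value FROM metabib.record_attr_flat
             WHERE id = bre.id AND attr = 'item_type' LIMIT 1) AS item_type,
           (SELECT value FROM metabib.record_attr_flat
             WHERE id = bre.id AND attr = 'item_form' LIMIT 1) AS item_form
      FROM biblio.record_entry bre;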
13:23 |
bshum |
eeevil: Sounds logical. Fwiw crosstab is definitely super slow so far in my local testing so that seems a deadend. |
13:24 |
bshum |
eeevil: In reading your reply and other places rec_descriptor gets used, I'm suddenly wary of circ/hold matrix |
13:24 |
bshum |
With the lookup functions I mean. |
13:25 |
eeevil |
nah, it ends up /only/ doing the hstore<->record dance for the one record's data |
13:26 |
eeevil |
reports are a problem because a join is not necessarily (or even likely) going to use an index lookup on the record id column from mrd |
13:26 |
eeevil |
the circ/hold stuff does |
16:59 |
berick |
bshum: yep |
16:59 |
bshum |
The client was understandably impatient and gave me a network error while I waited for it. |
17:01 |
|
jwoodard joined #evergreen |
17:03 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html |
17:04 |
|
mmorgan left #evergreen |
17:05 |
|
mrpeters left #evergreen |
17:09 |
gsams |
trying to export marc records using the marc_export tool in 2.3.5 right now, our newest library seems to not get pulled with --library, just runs for 60 seconds and quits with nothing |
17:11 |
gsams |
is there something that I am missing to get marc_export to see the new library by its shortname? |
17:14 |
bshum |
gsams: What's the full command that you're attempting to use? |
17:17 |
|
ldwhalen_mobile joined #evergreen |
17:18 |
|
ningalls joined #evergreen |
17:24 |
bshum |
I think it's bombing out while running through the retarget permit test on all the holds. |
17:25 |
bshum |
None of the 122 unfilled holds are for the library checking it in, or item owning. |
17:25 |
bshum |
And the rules don't allow the item to be holdable by the other libs. |
17:25 |
bshum |
But because there's so many holds, it just kills the time |
17:26 |
bshum |
Sigh |
17:28 |
|
kmlussier joined #evergreen |
17:34 |
jeff |
i blame commit ae9641 |
17:34 |
pinesol_green |
[evergreen|Thomas Berezansky] Nearest Hold: Look at 100 instead of 10 holds - <http://git.evergreen-ils.org/?p=Evergreen.git;a=commit;h=ae9641c> |
17:34 |
jeff |
who came up with that idea and said it was alright, anyway? ;-) |
17:35 |
jeff |
`` Jeff Godin claims they have done this and it has produced no issues for them.'' |
17:35 |
gmcharlt |
jeff: you should have a serious chat with that fellow |
17:35 |
* jeff |
ducks |
17:36 |
jeff |
if checkin modifier "speedy" is set, consider (and test) only ten holds... otherwise, do 100. if checkin modifier "exhaustive" set, check all holds... ;-) |
17:36 |
bshum |
Heh |
17:37 |
bshum |
I'm wondering if my problems might be PG 9.3 related |
17:37 |
jeff |
bshum: in one case where i was looking at things (and i think i was talking out loud here also) i found that the hold permit tests seemed to be running twice at checkin. no idea if that's been that way for a while, or if it's avoidable. |
17:37 |
jeff |
you might pull on that thread a little if you're digging into this, though. |
17:38 |
bshum |
2542c713ea4a0d686d7b7ceae352929b60a80806 since it mentions PG 9.3 and hold matrix functions |
17:38 |
pinesol_green |
[evergreen|Mike Rylander] LP#1277731: Disambiguate parameter/column names - <http://git.evergreen-ils.org/?p=Evergreen.git;a=commit;h=2542c71> |
17:38 |
bshum |
But really, I think I'm just running on fumes now :) |
17:41 |
dbwells |
gsams: One thing to consider is that --library specifies the owning_lib, not the circ_lib. Not sure if that is a factor in your scenario. |
17:42 |
gsams |
dbwells: that is actually what I want, I plan to use marc_export as a temporary measure for Boopsie uploads for this library |
17:44 |
dbwells |
gsams: if you search asset.call_number, do you find rows where the owning_lib matches the id for NRH from actor.org_unit? |
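A query along these lines does that check (NRH is the shortname from the conversation; otherwise only the stock schema is assumed):

    SELECT COUNT(*)
      FROM asset.call_number acn
      JOIN actor.org_unit aou ON aou.id = acn.owning_lib
     WHERE aou.shortname = 'NRH'
       AND NOT acn.deleted;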
17:47 |
bshum |
So |
17:47 |
bshum |
A retarget test |
17:47 |
bshum |
Time pre-upgrade DB: 13 ms. Time on production: 500+ ms |
17:49 |
dbwells |
gsams: Also, if it runs exactly for 60 seconds, that smells like a timeout issue. I'd consider checking what query is running in the DB when the script exits and go from there. |
18:06 |
gsams |
dbwells: asset.call_number returns 162k rows for that owning lib |
18:15 |
dbwells |
gsams: I'm heading out in just a few minutes, but if you want to explore the timeout angle, you could try passing a second argument to the $recids json_query in marc_export. Something like '{timeout => 600, substream => 1}' (that might not be exactly right, and I can't recall if 'substream' is necessary, but something like that). |
01:03 |
|
artunit_ joined #evergreen |
03:56 |
|
b_bonner joined #evergreen |
03:56 |
|
mtcarlson_away joined #evergreen |
05:04 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html |
06:33 |
|
Callender joined #evergreen |
07:23 |
|
artunit joined #evergreen |
07:44 |
|
akilsdonk joined #evergreen |
15:01 |
bshum |
eeevil: Hmm, the uncontrolled table has nothing in there either |
15:02 |
bshum |
metabib.uncontrolled_record_attr_value |
15:02 |
bshum |
Assuming that's the one you mean |
15:03 |
jeff |
jboyer-isl: i was about to ask you something starting with "in your testing", but then i saw your comment. :-) |
15:03 |
bshum |
eeevil: Hmm, might it be that because "multi" is still set to TRUE for that definition, that's a potential issue? |
15:05 |
jeff |
jboyer-isl: i like your approach of "don't let these API calls transfer captured holds at all" |
15:05 |
jeff |
jboyer-isl: it gets around the issue of inconsistent ahcm when opting not to retarget (but still transferring) the captured holds |
15:17 |
* jeff |
eyes OpenILS::Application::Cat::Merge::merge_records as potentially dead code |
15:18 |
jeff |
jboyer-isl: what methods have you used to clean up after this -- as you described it -- "havoc"? :-) |
15:20 |
jboyer-isl |
Since we don't know about it until after it's happened, usually the staff have already either checked the item out to someone or placed a new hold. We haven't yet gone to great lengths to correct anything. I think it was only reported a couple of times between our figuring out what was going on and my applying that basic patch. |
15:24 |
jeff |
neither old-school merge nor in-db merge seem to have paid any attention to title holds (though i didn't go digging too far back to confirm on the old-school method) |
15:30 |
jeff |
jboyer-isl: and the staff client appears to be the only thing calling open-ils.circ.hold.change_title[.specific_holds], and i don't see any references to OpenILS::Application::Circ::Holds::change_hold_title[_for_specific_holds] so i think that answering the question of "what's the impact when these holds are left on the old bib but their copies are moved" and "add a join to skip copies with in-transit holds also" followed by some testing should help bring |
15:38 |
|
rjackson-isl joined #evergreen |
15:38 |
|
jboyer-isl joined #evergreen |
15:39 |
jeff |
I can see two scenarios where Transfer Title Holds can be used. In one, there are no planned changes to copies or bibs, we're just moving the hold or holds. In the other, we've transferred (possibly as part of a merge) or are about to transfer copies to another bib. |
15:40 |
bshum |
Doh, array_to_string(array_agg()) crept in again over string_agg() |
15:40 |
bshum |
Just saw it part of function metabib.reingest_record_attributes() |
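For anyone unfamiliar with the distinction: the two spellings produce the same text, string_agg() just skips building the intermediate array. A self-contained example:

    SELECT array_to_string(array_agg(x), ' ') AS old_style,
           string_agg(x, ' ')                 AS preferred
      FROM (VALUES ('a'), ('b'), ('c')) AS t(x);
    -- both columns come back as 'a b c'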
15:42 |
jeff |
In the second, I don't know that we care if the already-captured holds stay behind on the old title record unless a) it will break fulfillment / transit completion -> available -> fulfillment or b) the hold gets out of its captured status for some reason, in which case it'll likely end up being an unfillable hold |
16:11 |
|
Callender joined #evergreen |
16:12 |
eeevil |
kmlussier / bshum: vandelay can match against random tag data without record attrs being involved (see: isbn), so ISTM there should be a way without involving the existing attr defs |
16:13 |
RoganH |
bshum: short email sent to dev list. We'll see where the discussion goes. :) |
16:13 |
kmlussier |
eeevil: The record attrs approach was what was recommended to me by an ESI developer back when we were testing the Vandelay development, because the standard Vandelay matchpoints required that a subfield be used. |
16:15 |
eeevil |
kmlussier: gotcha. this is a solvable issue, probably very simply |
16:17 |
eeevil |
bshum: looking at the code, record_attr_flat /does/ include uncontrolled values |
16:17 |
eeevil |
so, I was wrong up there -^ |
13:28 |
Dyrcona |
Anything added in 2013 or later is fine. |
13:29 |
Dyrcona |
Now, I might have done something in the database to cause it. I can't rule that out. |
13:32 |
Dyrcona |
None of my other book bags show duplicates, but they also don't have anything added in October of 2012, just September and November. |
13:33 |
jeff |
Dyrcona: I was able to add duplicate items to a list just now in our 2.5.1 system running lightly-ported 2.1 era templates |
13:34 |
jeff |
so if it was something done post-2.5.1, or something done at the template level, i obviously don't have that in place. |
13:35 |
jeff |
but other than that it was pretty easy to add three items to a temp list, then go to my lists and add them to a new list, then go back and add two of the three items to the temp list and then add those to the previously-created list. |
13:35 |
jeff |
don't have running master handy to test with. |
13:37 |
Dyrcona |
I can't find any Launchpad bugs. |
13:38 |
hbrennan |
Nor I. |
13:40 |
Dyrcona |
I am going to try to come up with a query to see how widespread this is. If it only affects my one list, then I'll get on with things and ignore it. |
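A rough version of that query (table and column names assumed from the stock container schema) would group list entries by bucket and bib and flag anything listed more than once:

    SELECT bucket, target_biblio_record_entry, COUNT(*) AS copies
      FROM container.biblio_record_entry_bucket_item
     GROUP BY 1, 2
    HAVING COUNT(*) > 1
     ORDER BY copies DESC;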
14:21 |
Dyrcona |
Well, it does not appear to be confined to a single point in time, so it is something that is ongoing for us at least. |
14:24 |
|
kbeswick joined #evergreen |
14:25 |
Dyrcona |
Well, that's not good. |
14:25 |
Dyrcona |
Trying to test a hypothesis about this being related to temporary lists, I get an internal server error on my dev machine. |
14:27 |
hbrennan |
I did see a bug report on internal server error on lists, when changing name... |
14:27 |
kmlussier |
I think I just saw Dyrcona fall down a rabbit hole. |
14:28 |
Dyrcona |
heh. I've been in that hole since about 11:30 am. :) |
16:52 |
|
kmlussier left #evergreen |
16:58 |
|
afterl left #evergreen |
17:19 |
|
mmorgan left #evergreen |
17:21 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html |
17:35 |
jeffdavis |
In 2.6, if I do a search with "Group Formats & Editions" checked, then click "Place Hold" on a metarecord (a result with a count in parentheses after the title), it should attempt to place a metarecord hold, correct? |
17:36 |
jeffdavis |
I feel like this is a dumb question, but on our test servers, the hold_type param in the Place Hold URL is T instead of M. |
18:26 |
bshum |
jeffdavis: I just tested it on one of our test servers and mine came up with M and not T when clicking place hold for a metarecord |
18:27 |
bshum |
Not sure why yours would say "T" |
18:35 |
jeffdavis |
Neither am I. :S Thanks for checking. |
18:49 |
|
ericar joined #evergreen |
19:11 |
jeffdavis |
Ah, looks like a bug introduced when merging our TPAC customizations into 2.6, phew. |
01:33 |
|
eeevil joined #evergreen |
01:33 |
|
phasefx2 joined #evergreen |
03:11 |
|
dcook joined #evergreen |
04:51 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html |
06:17 |
|
artunit_ joined #evergreen |
06:53 |
|
mrpeters joined #evergreen |
07:19 |
|
edoceo joined #evergreen |
09:43 |
bshum |
I haven't heard of anything like that, and we use Windows 7 64-bit all the time |
09:43 |
csharp |
okay - good to know |
09:44 |
bshum |
Well, "we" being most of our libraries / all of the rest of Biblio staff |
09:44 |
csharp |
well I only have access to 32-bit Windows 7 right now, so that rules out the main thing I was going to test |
09:44 |
|
kbeswick joined #evergreen |
09:45 |
csharp |
I assume it's one of those bugs that is elicited by a specific workflow |
09:54 |
* csharp |
daydreams about the beach he'll be sitting on tomorrow |
09:58 |
jeff |
Dyrcona: tt filter plugin for ncip work, or something else? |
09:59 |
Dyrcona |
jeff: For Evergren, actually. |
09:59 |
Dyrcona |
Evergre[e]n, even. :) |
11:23 |
Dyrcona |
And, Nautilus decides to take a nap. |
11:25 |
Dyrcona |
yboston: Dunno for sure, but don't think you're missing anything. |
11:26 |
berick |
yboston: what interface? |
11:26 |
yboston |
berick: my apologies, they are using the GUI batch import |
11:26 |
yboston |
on the client |
11:27 |
yboston |
Dyrcona: thanks, I was afraid I was missing something obvious. I am currently playing around with Supercat URLs to see recently imported bibs, but I am not seeing DB ids listed so far in my tests |
11:28 |
yboston |
for the record, the catalogers import one or two bib records, and then want to immediatly start editing those records. having the bib id would allow them to jump to that record. |
11:29 |
yboston |
Right now they immediately try to search for the record, but I believe there is a delay for the records to be completely ingested, so they don't get a match to the just-imported record |
11:29 |
yboston |
*immediately |
11:30 |
jeff |
or they searched for it in the catalog, then imported, then search again and their search results from the first time are still cached, etc. |
11:31 |
jeff |
i thought that there was a way to get to the marc editor from the import queue view, but i could be wrong. one workaround would be to append negative nonsense to their search to force it to not use the cached search results. |
11:32 |
jeff |
i.e., if you searched for Example Terms, you could search for Example Terms -sdhkjhgrh |
11:35 |
yboston |
jeff: thanks for alerting me about the search caching behavior, good to know, I knew there was some caching, but not any details about it |
11:37 |
yboston |
jeff (or someone else): how long are searches / search results typically cached? (just out of curiosity) |
11:40 |
berick |
yboston: default is 5 minutes. it's configurable in opensrf.xml, though |
11:40 |
yboston |
berick: thanks for all the info. |
11:41 |
yboston |
I guess I can put in a wishlist item for this, I am just glad I was not missing something obvious |
11:43 |
yboston |
I wonder what other catalogers do to work around this, though I am sure others have very different workflows or it takes them more than 5 minutes to search for the recently imported record |
11:45 |
yboston |
btw, I am trying to get the DB ids by testing out supercat URLs like this one http://catalog.berklee.edu/opac/extras/feed/freshmeat/htmlholdings/biblio/import/200/ |
11:45 |
yboston |
but so far I am not getting the db id, can anyone recommend other supercat URLs (or something similar) to try to stumble on the DB ids? |
11:46 |
yboston |
thanks in advance |
11:54 |
pastebot |
"berick" at 64.57.241.14 pasted "for yboston -- show imported record ID in Vandelay queue" (20 lines) at http://paste.evergreen-ils.org/59 |
11:57 |
berick |
e.g. https://bill-dev2.esilibrary.com/eg/vandelay/vandelay?qid=11&qtype=bib (only works in FF for now -- still importing -- jump to page 100) |
11:59 |
|
kmlussier left #evergreen |
15:05 |
hbrennan |
thank you thank you for plugging that into git |
15:05 |
hbrennan |
If this gets fixed our library will rejoice |
15:06 |
hbrennan |
I will help in any way possible |
15:06 |
bshum |
I'm not a fan of how the links are generated in general. I'd kind of prefer if they were more combined in some way, having two links to the same title seems pointless to me (with the 490 and 800 on the same bib) |
15:07 |
bshum |
Also, I'm not sure why the hanging punctuation doesn't go away, I thought I saw stuff that seemed to be replacing it, but I have to read it more closely. |
15:07 |
bshum |
The hanging ; at the end of the entry on the test record on spork1 annoys me :) |
15:07 |
Bmagic |
bshum: perhaps the limit sets need to be "global" let me try that |
15:09 |
bshum |
Bmagic: I'd be curious if maybe changing it to use fallthrough = TRUE would alter how it behaves. What that would mean is that if rule 261 is linked to any other rule events, or used in concert with other rules, it might also extend the limit set to those other rule interactions. |
15:09 |
hbrennan |
It's definitely not very friendly |
15:11 |
jeff |
while reserving the right to revise my opinion, i'd say show the 490 where ind1=0, because that indicates that there will not be a corresponding 830 |
15:13 |
hbrennan |
so if there isn't any 800 info, show a link for the 490? |
15:13 |
hbrennan |
I'm rusty on cataloging |
15:13 |
bshum |
kmlussier: Do you remember offhand any graphic_880 test bibs in the concerto set that link to series? I want to see how it's presented... |
15:13 |
* bshum |
needs more google powers |
15:13 |
jeff |
I don't recall offhand what series-related adjustments we've made to our templates (or haven't backported), but http://catalog.tadl.org/eg/opac/record/46650237?locg=22 is an example of a record with a 490 with ind1=0 and no 830. I'd like the information (book 3 of The Hardy boys mystery series) to be displayed somewhere. |
15:14 |
bshum |
jeff: Yeah, that's part of what I'm concerned with monkeying with the links. I think we should revise how the links work but also preserve displaying the full series statements. |
15:15 |
* kmlussier |
looks. |
16:26 |
pinesol_green |
jboyer-home: Down time is a fact of business when you're a poor 501c3 corporation. |
16:27 |
|
ldw joined #evergreen |
16:27 |
Dyrcona |
jboyer-home++ |
16:28 |
Dyrcona |
Maybe I should test with a patron who has less than 4,196 circs in their history. |
16:29 |
bshum |
kmlussier: Oh, uh... your question from like hours ago. Whenever the next week starts is probably fine for the next dev meeting. |
16:29 |
* kmlussier |
had already forgotten she asked the question. :) |
16:29 |
* bshum |
doesn't have a strong opinion right now |
17:03 |
Dyrcona |
Well, going home for the day. |
17:03 |
Dyrcona |
Oddly enough, it feels like I just got here. |
17:05 |
|
mmorgan left #evergreen |
17:24 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html |
17:25 |
Bmagic |
bshum: to your knowledge, in order to allow for example 5 items of 3 different circ mods: dvd,book,audio for a total of 15, can I just associate 3 different limit sets to the same circ policy, or do I need 3 circ policies with a single limit set each? |
17:26 |
bshum |
Bmagic: Err, 5 of each sounds like 3 different limit sets to me. |
17:27 |
bshum |
Cause you wouldn't allow someone to get 10 of dvd and 5 of book to make up 15? |
19:57 |
hbrennan |
bshum: I'm curious what library "spork1" is.... That's the library with the same record info, but they've managed to hide the series author info from the link, so the link actually works. Perhaps there is a quick fix for now. |
19:59 |
hbrennan |
You can send me a later or I'll just find you next week. I forgot I'm out of here early today |
19:59 |
hbrennan |
Thanks! |
19:59 |
bshum |
hbrennan: that's my test server |
19:59 |
hbrennan |
ohhh |
19:59 |
bshum |
It's not a live system |
19:59 |
hbrennan |
So is that something you just did today? |
12:26 |
bshum |
Or how the command was issued |
12:26 |
mrpeters |
maybe |
12:27 |
Dyrcona |
mrpeters: try it at night when no one should be cataloging. |
12:27 |
mrpeters |
Dyrcona: its a test db |
12:28 |
Dyrcona |
Ah. i saw production above. |
12:28 |
mrpeters |
what do i need to do to clean up? |
12:28 |
mrpeters |
i have no pending at.events |
14:49 |
pinesol_green |
kmlussier: Have you tried turning it off and back on again? |
14:49 |
pinesol_green |
kmlussier: I am only a bot, please don't think I'm intelligent :) |
14:50 |
hbrennan |
haha |
14:50 |
jeff |
$logger->info('some text') isn't logging where i would expect. that's unusual. |
14:51 |
jeff |
test/dev system, might just be configured wrong. |
14:52 |
dbwells |
bshum: P.S. if you (or someone else with editing privileges) could run that last ~no, it would fix the quote issue mrpeters pointed out earlier. Whether we use '=' or 'TO' doesn't seem to matter, but feel free to change that as well. |
14:52 |
bshum |
Oh sure. |
14:53 |
bshum |
~no search_path is <reply> After restoring a database, make sure to reset the search_path accordingly with something like: ALTER DATABASE unpredicable_haxxors_go_away SET search_path = 'evergreen, public, pg_catalog'; |
14:53 |
pinesol_green |
I'll remember that bshum |
14:56 |
yboston |
heads up, DIG will have its (rescheduled) monthly meeting at 3 PM EST (a few minutes from now) |
14:58 |
bshum |
kmlussier: kbutler: hbrennan: Hmm, with testing, I can't seem to create the problem I thought existed with the password and phone thing. Nevermind, guess it does work the way I thought it should... |
14:58 |
hbrennan |
So it only changes during the first entering of phone? |
14:59 |
bshum |
Yeah, it seems to only apply it during new registrations so far in my testing. |
14:59 |
kbutler |
hmm |
14:59 |
bshum |
Changing it after, and then updating the password doesn't change it |
14:59 |
bshum |
At least on my test users |
14:59 |
bshum |
Err, changing the phone afterwards doesn't change the password from what I changed it to.... |
14:59 |
* bshum |
can't explain anything today apparently |
15:00 |
jeff |
ah. logging was not the issue. |
15:00 |
hbrennan |
On a related note, I noticed before that the password boxes weren't talking to each other.. so it didn't matter if they matched.. has that bug been fixed? |
15:00 |
bshum |
o.O |
16:15 |
yboston |
remingtron: BTW, the error with Asciidoc to HTML is that asciidoc gets confused about image paths because we are using the "include:file_name.txt" directive, in terms of resolving paths |
16:21 |
|
tspindler left #evergreen |
16:27 |
remingtron |
yboston: glad you're making progress. Is there a particular reason you want to build it all locally? |
16:29 |
yboston |
remingtron: So I can test major changes to the docs, to see if I am breaking the converson |
16:29 |
yboston |
*conversion |
16:29 |
yboston |
remingtron: without waiting until midnight for the docs server to process the content |
16:30 |
yboston |
remingtron: also, to foster redundancy in case the server goes down |
16:38 |
remingtron |
yboston: right, nice to not have to wait until midnight to see what breaks. :) |
16:38 |
yboston |
remingtron: I'll keep you posted on my progress |
16:38 |
remingtron |
great, thanks |
17:08 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html |
17:08 |
|
mmorgan left #evergreen |
19:55 |
bshum |
gmcharlt: Just read https://bugs.launchpad.net/evergreen/+bug/1326983, sounds straightforward, but I have a question about it. The .example file doesn't overwrite an existing file if set up on the system, right? So we need to inform folks about the bug fix (if they don't actually read the changelogs). Release notes aren't really done for point releases, but maybe there's some other method we can think of. |
19:55 |
pinesol_green |
Launchpad bug 1326983 in Evergreen 2.5 "stock A/T filter for hold_request.shelf_expires_soon hook is too broad" (affected: 1, heat: 6) [Low,New] |