Time |
Nick |
Message |
03:13 |
|
schatz joined #evergreen |
04:37 |
|
Stompro joined #evergreen |
04:37 |
|
dbs joined #evergreen |
04:37 |
|
mnsri joined #evergreen |
07:43 |
|
jboyer-isl joined #evergreen |
07:47 |
|
ericar joined #evergreen |
07:54 |
|
rjackson_isl joined #evergreen |
08:35 |
|
Dyrcona joined #evergreen |
08:35 |
|
mrpeters joined #evergreen |
08:47 |
|
Shae joined #evergreen |
09:01 |
|
mmorgan joined #evergreen |
09:25 |
|
yboston joined #evergreen |
09:31 |
|
collum joined #evergreen |
09:51 |
|
ericar_ joined #evergreen |
09:55 |
|
krvmga joined #evergreen |
09:57 |
krvmga |
our catalog URL is http://catalog.cwmars.org . i don't understand why entering this string in the basic search box doesn't find the item (title:^The daughters$ && author:^Celt, Adrienne$) |
09:57 |
krvmga |
at a glance, i don't see anything wrong with the string. |
09:59 |
krvmga |
"The daughters" and "Celt, Adrienne" are in the MARC record |
09:59 |
krvmga |
but there are two things i notice |
10:00 |
krvmga |
quoting what's in the 245 "The daughters :" |
10:00 |
krvmga |
quoting what's in the 100 "Celt, Adrienne." |
10:00 |
kmlussier |
krvmga: You need to include that ending punctuation for anchored searches to work. |
10:01 |
krvmga |
the colon and the period. are those messing up the search? |
10:01 |
kmlussier |
krvmga: yes |
10:01 |
krvmga |
voila! |
10:01 |
krvmga |
sometimes, you know, just saying these things out loud in irc is SO helpful. |
10:01 |
krvmga |
kmlussier: thx. |
10:03 |
krvmga |
hmmm... |
10:04 |
krvmga |
i added in the punctuation and the search still didn't find the item |
10:04 |
krvmga |
(title:^The daughters :$ && author:^Celt, Adrienne.$) |
10:05 |
krvmga |
i also tried various permutations - Adrienne with no period but "daughters" with colon, etc. |
10:05 |
kmlussier |
krvmga: Check your metabib index entries just to make sure that's how it's being indexed. It needs to exactly match what's in the metabib entries. |
10:05 |
krvmga |
i will look at that now |
10:06 |
krvmga |
kmlussier: check for this exact record, you mean? |
10:06 |
krvmga |
or check config. metabib? |
10:07 |
kmlussier |
Check in metabib.title_field_entry to see how the title is being indexed for that particular record and do the same for metabib.author_field_entry. |
10:08 |
krvmga |
omw to check |
10:10 |
kmlussier |
Looking at the record, I suspect the search would need to be for ^The daughters : a novel$ |
10:11 |
kmlussier |
Possibly with a / at the end of it. But it all depends on what you see in the metabib entries. |
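A toy sketch of the exact-match behavior kmlussier describes. This is not Evergreen's real normalizer (search_normalize does more than lowercasing), and the sample index entry is hypothetical; it only shows why an anchored query misses an entry indexed with its full 245, subtitle and punctuation included:

```python
# Illustrative only: an anchored ^...$ query must match the whole indexed
# string, so "^The daughters$" misses an entry that was indexed with the
# subtitle and trailing punctuation from the 245.
def anchored_match(index_entry, query):
    """Compare a ^...$ anchored query against a metabib entry."""
    if query.startswith("^") and query.endswith("$"):
        return index_entry.lower() == query[1:-1].lower()
    return query.lower() in index_entry.lower()

# Hypothetical metabib.title_field_entry value for this record:
entry = "the daughters : a novel /"
assert not anchored_match(entry, "^The daughters$")           # too short
assert anchored_match(entry, "^The daughters : a novel /$")   # exact match
```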
10:11 |
Dyrcona |
krvmga: I suggested not using both anchors yesterday, and I meant it. |
10:13 |
krvmga |
Dyrcona: LOL. And I take your words to heart. Unfortunately, EBSCO doesn't seem to have the same attitude. A link to this item from Novelist Plus is generating that search string and I need to figure out whether to make it an EBSCO problem or a CWMARS problem. |
10:13 |
bshum |
"I concur, sir, the message is authentic." |
10:13 |
Dyrcona |
It's an EBSCO problem. |
10:13 |
bshum |
krvmga: Oh that's definitely an EBSCO problem. Why even ask? |
10:14 |
krvmga |
bshum: lol, stahp! you're killing me. |
10:14 |
* krvmga |
suspects bshum has been binging on submarine movies lately. |
10:15 |
kmlussier |
I suspect the search string may be different for MVLC and NOBLE since they would run across the same issues. |
10:15 |
krvmga |
Dyrcona: what happens in MVLC if you go into Novelist Plus and click on a "See this in X library" link? |
10:18 |
bshum |
krvmga: Maybe... but also, I quote stuff like this all the time with my friends. I just don't always find ways of doing it in my regular day to day. So I snatch them up as I can. :) |
10:18 |
krvmga |
bshum: :) |
10:19 |
Dyrcona |
krvmga: To be honest, I don't even know how to do that. I just set the stuff up in the back end. I don't actually use it. |
10:21 |
* bshum |
doesn't know what "Novelist Plus" is, but assumes it's different from "Novelist Select"? |
10:24 |
krvmga |
bshum: yes, novelist select is the catalog enhancement |
10:24 |
Dyrcona |
krvmga: James might know better than I if we've had problems reported with Novelist Plus. You could shoot him an email. |
10:25 |
krvmga |
Dyrcona: thx. |
10:26 |
* bshum |
doesn't have any way of testing it then. |
10:26 |
bshum |
But presumably, it's an EBSCO problem and they can adjust the URL syntax appropriately :) |
10:27 |
bshum |
We went some rounds with them on our Novelist links back when we transitioned from JSPAC to TPAC too. |
10:28 |
* bshum |
goes back to thinking about good movie quotes |
10:29 |
bshum |
(and otherwise working) |
10:35 |
|
Christineb joined #evergreen |
10:57 |
|
ericar_ joined #evergreen |
11:01 |
|
ericar_ joined #evergreen |
11:07 |
yboston |
Good morning, I have a question about how folks handle the Group Penalty Threshold of patron_exceeds_checkout_count when you have a couple of branches with their own values. Let's take a simplified example: if I have three branches (B1 & B2 & B3) that are each set up with patron_exceeds_checkout_count = 10. |
11:07 |
yboston |
Do you then end up setting up a patron_exceeds_checkout_count = 30 at the system or consortium level? I am just wondering if I should be putting a sort of global cap on maximum items out at the consortium or system level that adds up the maximums of all the branches. |
11:07 |
yboston |
I was thinking that if I have all branches set up with the proper patron_exceeds_checkout_count, and branches are the only ones that own items, then I may not need a limit at the system or consortium level. Thanks in advance |
11:11 |
mmorgan |
krvmga: FWIW, here is NOBLE's query string for our Ebsco custom link: bool=and&qtype=author&contains=contains&query={author}&bool=and&qtype=title&contains=contains&query="{title}"&bool=and&qtype=author&contains=contains&query=&_adv=1&locg=1&pubdate=is&date1=&date2=&sort=pubdate.descending |
11:14 |
kmlussier |
mmorgan: Just a minute too late! :) |
11:14 |
kmlussier |
I'll send him an email. |
11:15 |
mmorgan |
:-( ...just got out of a meeting. |
11:20 |
Dyrcona |
yboston: It should be aware of the hierarchy when looking up penalties, so if you set it at all the branches, then you don't need it on the system or consortium levels. |
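A sketch of the hierarchy-aware lookup Dyrcona describes (assumed semantics, not Evergreen's actual SQL; the org tree and helper are invented for yboston's example): the nearest org unit with a threshold wins, so per-branch values make a higher-level one just a fallback.

```python
# Assumed org tree for yboston's B1/B2/B3 example; not real Evergreen data.
PARENTS = {"B1": "SYS1", "B2": "SYS1", "B3": "SYS1", "SYS1": "CONS", "CONS": None}

def threshold_for(org, thresholds):
    """Walk up the org tree and return the nearest defined threshold."""
    while org is not None:
        if org in thresholds:
            return thresholds[org]
        org = PARENTS[org]
    return None

limits = {"B1": 10, "B2": 10, "CONS": 30}
assert threshold_for("B1", limits) == 10   # branch value shadows consortium
assert threshold_for("B3", limits) == 30   # no branch value, falls back
```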
11:20 |
bshum |
yboston: For our consortium, we mostly avoided use of the group penalty threshold for max circ items |
11:21 |
bshum |
yboston: Instead we focused on use of circ limit sets to restrict the number of materials allowed per circulation rules |
11:21 |
yboston |
Dyrcona: thanks for the feedback, wanted to make sure my thinking made sense |
11:22 |
Dyrcona |
It's not a bad idea to set it up for the whole consortium just in case. |
11:22 |
yboston |
bshum: thanks for the feedback, for our library in Spain I will probably switch to using circ limit sets since it is set up differently than the other branches |
11:23 |
yboston |
Dyrcona: OK |
11:23 |
yboston |
Dyrcona: I agree, |
11:23 |
bshum |
I think the main difference is that our libraries didn't want an actual hard block on patron activities (which is usually what ends up happening when a penalty is reached), vs. the limit set, which just restricted circulation beyond a certain point. |
11:24 |
bshum |
Though one can alter the penalty block parameters |
11:25 |
yboston |
bshum: I see your point, thanks |
11:25 |
bshum |
Also, with an exceeds-checkout stop, it's indiscriminate on type of material or varying locations. |
11:25 |
bshum |
So 10 of anything, and boom, you're blocked. Doesn't matter if it's 5 from library A, and 5 from B. |
11:25 |
bshum |
Or 5 books + 5 dvds, etc. |
11:25 |
bshum |
Limit sets were just more magically delicious. And insane. |
11:25 |
bshum |
So obviously we picked that. |
11:26 |
yboston |
bshum: thanks for all your examples, it adds to my understanding of the pros and cons |
11:26 |
bshum |
yboston: Good luck :) |
11:30 |
|
bmills joined #evergreen |
11:34 |
|
buzzy joined #evergreen |
11:34 |
yboston |
bshum: thanks |
11:51 |
|
jihpringle joined #evergreen |
12:36 |
dbs |
yay, successfully hacked around acquisition's determination to default to Date().getFullYear().toString() as the default year despite our fiscal year that starts in July :) |
12:47 |
dbs |
I thought it was a bit weird that it was still defaulting to 2015 even though we do have acq.fiscal_year populated at the consortial level, but apparently that doesn't come into play in the fund grid stuff at all. |
12:49 |
kmlussier |
dbs: There are some bug reports related to that, I think. |
12:50 |
kmlussier |
Our fiscal year also begins in July. |
12:53 |
dbs |
YAOUS? |
12:53 |
dbwells |
Our fiscal year also starts in July. In the lamest of workarounds, I think we just rename our fiscal year halfway through the year to make it work :) |
12:54 |
csharp |
ick |
12:54 |
dbs |
hah |
12:54 |
csharp |
we also need a UI to configure FYs, but I assume that's on the Big Board™ for acq inside webby |
12:56 |
dbs |
my workaround was to hardcode '2016' into the fund selector instead of " || Date()...", and then turn the loadFundGrid(... || Date()...) call into loadFundGrid(... || null) to rely on loadFundGrid's default |
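A minimal sketch of the default dbs wanted instead of the hardcoded calendar year, assuming the fiscal year is named for the calendar year in which it ends (so August 2015 falls in FY 2016; the helper is illustrative, not the Dojo code he patched):

```python
from datetime import date

# Assumption: the FY is labeled by its ending calendar year. The fund
# grid code dbs patched defaulted to the plain calendar year via
# Date().getFullYear() instead of anything fiscal-year-aware.
def default_fiscal_year(today, start_month=7):
    return today.year + 1 if today.month >= start_month else today.year

assert default_fiscal_year(date(2015, 8, 4)) == 2016   # the date of this log
assert default_fiscal_year(date(2016, 2, 1)) == 2016   # same fiscal year
assert default_fiscal_year(date(2015, 6, 30)) == 2015  # previous FY
```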
12:58 |
kmlussier |
Since acq is primarily Dojo interfaces and will most likely be moved over as is, I didn't think there was a Big Board for acq inside webby. |
12:58 |
kmlussier |
Unless somebody is looking at Angularizing the acq interfaces. That would be nice! |
13:03 |
csharp |
yeah |
13:23 |
|
kitteh_ joined #evergreen |
13:27 |
|
RoganH joined #evergreen |
14:35 |
|
rlefaive joined #evergreen |
15:02 |
Dyrcona |
@weather 01845 |
15:02 |
pinesol_green |
Dyrcona: The current temperature in WB1CHU, Lawrence, Massachusetts is 79.0°F (3:02 PM EDT on August 04, 2015). Conditions: Light Thunderstorms and Rain. Humidity: 96%. Dew Point: 77.0°F. Pressure: 29.80 in 1009 hPa (Rising). Severe Thunderstorm Watch 469 in effect until 8 PM EDT this evening... |
15:05 |
csharp |
okay, I've thought for years now that reports were going to be the death of me... I was wrong - it's acq! |
15:05 |
csharp |
@blame acq |
15:05 |
pinesol_green |
csharp: acq 's bugfix broke csharp's feature! |
15:05 |
csharp |
pinesol_green: damn skippy |
15:05 |
pinesol_green |
csharp: Go away, or I'll replace you with a very small shell script! |
15:05 |
Dyrcona |
hah |
15:05 |
|
jlitrell joined #evergreen |
15:05 |
Dyrcona |
acq is very picky. |
15:06 |
kmlussier |
csharp: What's the problem? |
15:06 |
csharp |
kmlussier: it's mostly the formula of software + actual living people ;-) |
15:07 |
csharp |
basically, supporting acq is like supporting a whole new software suite |
15:07 |
RoganH |
I thought living people were the bugs in the grand design. |
15:08 |
csharp |
https://www.youtube.com/watch?v=eVSlE28hOgI |
15:09 |
csharp |
kmlussier: we're onboarding three libraries at once, and we're learning what kinds of mistakes people are going to be making in setup |
15:10 |
csharp |
we have an acq-specific sandbox server for the libraries to learn on, but now that we're in production with one of them, we're seeing the rubber meet the road |
15:10 |
kmlussier |
csharp: Yeah, I can see how the setup process can be difficult, especially when bringing several on at one time. I've seen a bit of that. |
15:11 |
Dyrcona |
csharp: I've heard that some consortia don't allow library staff anywhere near the acq setup and do it for them. |
15:11 |
csharp |
between permissions, org_unit hierarchies, and dojo quirks, I have my hands full |
15:12 |
csharp |
Dyrcona: yeah, we're identifying those pieces we don't want them to touch |
15:12 |
Dyrcona |
Freedom Zero isn't just a philosophical stance. It is what actually happens in practice. |
15:13 |
csharp |
anything that EDI depends on for instance ;-) |
15:25 |
Dyrcona |
We let the library staff set that up and then help them fix whatever mistakes happen. |
15:31 |
|
schatz joined #evergreen |
15:32 |
jonadab |
Does anyone know which documentation I should be reading to find the fastest way to turn a .MRC file into something Evergreen can import? |
15:33 |
mrpeters |
jonadab: is it MARC21 binary? |
15:34 |
jonadab |
It is clearly binary, not text based primarily. |
15:34 |
mrpeters |
have you tried using yaz-marcdump to view it? |
15:34 |
jonadab |
I have not. I will look into that. |
15:35 |
mrpeters |
something like so: yaz-marcdump -f MARC-8 -t UTF-8 -o marcxml yourfile.MRC >> converted.marc.xml |
15:36 |
|
remingtron joined #evergreen |
15:36 |
* jonadab |
installs yaz |
15:36 |
mrpeters |
you also want to add <?xml version="1.0" encoding="UTF-8" ?> as the first line in that converted.marc.xml file |
15:36 |
|
remingtron joined #evergreen |
15:36 |
mrpeters |
but to view them, yaz-marcdump yourfile.MRC | less should do the trick |
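One way to answer mrpeters's "is it MARC21 binary?" question without yaz is to check the record leader. The helper below is a hypothetical sketch, but the layout it checks is from the MARC21 spec: leader bytes 0-4 hold the record length as ASCII digits, and every record ends with the terminator byte 0x1D.

```python
# Hypothetical helper: sniff whether a byte string looks like MARC21 binary.
def looks_like_marc21(data):
    if len(data) < 24:            # too short to hold a full 24-byte leader
        return False
    length_field = data[:5]       # leader 00-04: record length, ASCII digits
    if not length_field.isdigit():
        return False
    reclen = int(length_field)
    # The first record should end with the MARC record terminator (0x1D).
    return len(data) >= reclen and data[reclen - 1] == 0x1D

minimal_fake = b"00030" + b"x" * 24 + b"\x1d"   # synthetic 30-byte record
assert looks_like_marc21(minimal_fake)
assert not looks_like_marc21(b"this is not a marc record....")
```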
15:37 |
dbwells |
jonadab: you might find this page helpful as well. This particular solution does the xml conversion step with pymarc. http://en.flossmanuals.net/evergreen-in-action/sending-gentle-reminders-to-your-users-setting-up-notifications-and-triggers/ |
15:37 |
mrpeters |
but, your goal is to get them into marc.xml and then use marc2bre.pl to transform them into data for the biblio.record_entry table |
15:38 |
mrpeters |
and then parallel_pg_loader.pl with your .bre file as the input will push out sql files which will populate the database with your converted marc records |
15:38 |
kmlussier |
Are we still planning to have a dev meeting tomorrow? |
15:38 |
mrpeters |
i have a great bash script that gmcharlt helped me make many years ago that i still find incredibly useful |
15:38 |
jonadab |
mrpeters: Oh, that looks highly relevant. I'm getting XML output, yeah. |
15:38 |
dbwells |
that link name I posted seems off, huh... It goes to the right page for me, though. |
15:39 |
mrpeters |
jonadab: i'd be happy to help get you started |
15:39 |
jonadab |
dbwells: I get a page about migrating from a legacy system, which seems relevant, yeah. |
15:39 |
jonadab |
I will look at that too, thanks. |
15:40 |
jonadab |
mrpeters: I'm going to have a look at what you two have already shown me first, I'll be back if I have more questions :-) |
15:41 |
dbwells |
jonadab: yeah, three headings in is "Migrating your bibliographic records", which assumes starting with a MARC21 file. I like that it's a little more direct, which I think removes some of the mystery behind the process. Good luck! |
15:41 |
jonadab |
Ok, thanks. |
15:41 |
pastebot |
"mrpeters" at 64.57.241.14 pasted "Bash script for converting MARC21 Binary to SQL for loading into Evergreen" (67 lines) at http://paste.evergreen-ils.org/20 |
15:42 |
mrpeters |
just in case you want to have a look at that |
15:42 |
mrpeters |
the quick_metarecord_map.sql is from the random magic spells page |
15:42 |
pastebot |
"mrpeters" at 64.57.241.14 pasted "quick_metarecord_map.sql" (31 lines) at http://paste.evergreen-ils.org/21 |
15:44 |
mrpeters |
and ESI has been great in continuing to offer their migration tools on their git -- git clone git://git.esilibrary.com/git/migration-tools.git |
15:45 |
Dyrcona |
kmlussier: I suppose that we should. |
15:45 |
mrpeters |
i know this says it's for 2.1, and the parallel_pg_loader step has changed for ingest, but http://docs.evergreen-ils.org/2.1/html/migrating_records_using_migration_tools.html is still very relevant |
15:47 |
|
maryj joined #evergreen |
15:51 |
Dyrcona |
It still puzzles me why everyone wants to delete stuff all the time. What problem are they trying to solve that a simple deleted flag doesn't? |
15:53 |
Dyrcona |
Horizon used to have a mechanism to actually delete stuff and then people would wonder why reports were messed up or bills or whatever. |
15:53 |
kmlussier |
Dyrcona: I think that's a very good question to ask. |
15:54 |
Dyrcona |
And yet, our folks still ask about really deleting things despite memories of those problems. |
15:54 |
|
akilsdonk joined #evergreen |
15:55 |
jonadab |
Dyrcona: Same reason some people empty their trash every day even though it's only got three things in the bottom of it. |
15:56 |
jonadab |
Some people just have an emotional need to "clean up" and throw things away. |
15:56 |
kmlussier |
In the case of very large consortia, I think there is a concern of how much space is being taken up by those deleted records. |
15:57 |
jonadab |
Really? Data that humans have taken the trouble to enter, taking up enough space to be a problem in the twenty-first century? |
15:57 |
berick |
in EG, deleted records stay indexed, too, so there could be an actual impact on speed |
15:57 |
berick |
well |
15:57 |
berick |
no |
15:57 |
berick |
that's not fair |
15:58 |
berick |
they are filtered out, of course |
15:58 |
Dyrcona |
jonadab: Well, you can do that in Evergreen, if you really make the effort, and you can throw away all kinds of additional stuff that you didn't mean to. |
15:58 |
berick |
but still in the indexes |
15:58 |
Dyrcona |
berick: That depends. |
15:58 |
Dyrcona |
Some indexes exclude deleted items. |
15:58 |
berick |
Dyrcona: i'm thinking of metabib.*_field_entry (index may not be the best term there) |
15:59 |
kmlussier |
Is there a reason for that? Could we have those entries removed as soon as a record is deleted? |
15:59 |
berick |
kmlussier: there is. i went down this rabbit hole not too long ago |
15:59 |
Dyrcona |
kmlussier: Those are kept so staff can see them and resurrect them if necessary. |
15:59 |
berick |
let me see if I can find it |
15:59 |
jonadab |
Dyrcona: Believe me, I know about throwing away stuff you didn't mean to. IMO, there are two kinds of people: people who barely ever delete anything, and people who routinely waste time recreating things they've deleted. |
16:00 |
Dyrcona |
jonadab++ |
16:00 |
jonadab |
(We're a small library, so I work with the public enough to see the latter a lot.) |
16:01 |
Dyrcona |
IIRC, you can include or exclude deleted items when doing the metabib and other ingests. |
16:01 |
jonadab |
Of course, of the people who never delete anything, only about half are organized enough to _find_ the stuff they didn't delete when they need it again... |
16:01 |
berick |
bug #797238 also mentions an (apparent) desire to report on deleted record data |
16:01 |
pinesol_green |
Launchpad bug 797238 in Evergreen "Re-indexing deleted records upon modification" (affected: 1, heat: 6) [Undecided,Invalid] https://launchpad.net/bugs/797238 |
16:01 |
miker |
indeed you can (include/exclude deleted records), to support the #deleted modifier |
16:02 |
Dyrcona |
And to answer that bug further you can force a reingest on a deleted record, but I don't think it happens automatically. |
16:06 |
berick |
i started down this path before when noticing we had 32 million rows in metabib.full_rec pointing to deleted bibs. |
16:06 |
Dyrcona |
berick: understandable. :) |
16:07 |
Dyrcona |
I think the bulk of our database storage is taken up by database indexes and not data in tables, the latter including the search indexes. |
16:11 |
* miker |
pines for a day when mfr is GONE |
16:12 |
Dyrcona |
OOh... Looks like I missed something in that email. |
16:12 |
Dyrcona |
Most transactions get moved to user 1, but circulations get deleted. |
16:13 |
Dyrcona |
So, yah, you lose statistics. |
16:20 |
Dyrcona |
Hah. Evergreen doesn't allow "suicide." # No deleting yourself - UI is supposed to stop you first, though. |
16:21 |
|
rlefaive joined #evergreen |
16:22 |
bshum |
That's funny. |
16:35 |
|
mmorgan joined #evergreen |
16:42 |
|
schatz joined #evergreen |
16:46 |
* kmlussier |
shakes her fist at the power grid. |
16:47 |
* mmorgan |
has done a fair amount of that today also |
16:48 |
mmorgan |
actually, it was the thunderstorms, can't really blame the grid today. |
16:48 |
kmlussier |
I was just about to hit Save on a blog post when the power went out. :( |
16:49 |
mmorgan |
:-( |
16:50 |
Dyrcona |
Ah, bummer. |
16:50 |
mmorgan |
can't remember what I had open and unsaved when the power went out early this afternoon. |
16:50 |
mmorgan |
miraculously, power came back with less than 8 minutes left on the UPS :-D |
16:59 |
|
maryj joined #evergreen |
17:06 |
bshum |
Dyrcona++ # staff client intrigues |
17:06 |
Dyrcona |
I hope the email makes sense. :p |
17:07 |
Dyrcona |
I felt kind of rushed, since it is time to go home. |
17:07 |
Dyrcona |
And, with that, I disappear. |
17:08 |
|
mmorgan left #evergreen |
17:11 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html |
18:16 |
|
buzzy joined #evergreen |
18:31 |
|
jwoodard joined #evergreen |
18:52 |
|
jonadab_znc joined #evergreen |
18:53 |
|
eady joined #evergreen |
19:02 |
|
bmills joined #evergreen |
19:35 |
|
dcook joined #evergreen |
20:03 |
|
akilsdonk joined #evergreen |
20:03 |
|
maryj joined #evergreen |
20:46 |
|
rlefaive joined #evergreen |
20:58 |
|
bmills joined #evergreen |
21:31 |
|
mtj_- joined #evergreen |
21:49 |
|
mtj_- joined #evergreen |
21:57 |
|
mtj_- joined #evergreen |
22:00 |
|
mtj_- joined #evergreen |
22:37 |
|
mtj__ joined #evergreen |
23:05 |
|
mtj_ joined #evergreen |
23:08 |
|
mtj_ joined #evergreen |