Time |
Nick |
Message |
00:44 |
|
jboyer_isl joined #evergreen |
01:21 |
|
atS_Vin joined #evergreen |
01:22 |
atS_Vin |
Hello to everyone, I just want to ask how to add books in Evergreen. I keep searching Google but can't find an answer, please help |
01:29 |
atS_Vin |
I hope someone can help me... |
06:38 |
|
artunit_ joined #evergreen |
06:47 |
|
shart290__ joined #evergreen |
06:59 |
csharp |
shart290__: 'localhost' should work |
07:46 |
|
rjackson-isl joined #evergreen |
07:47 |
|
collum joined #evergreen |
07:51 |
|
jboyer-laptaupe joined #evergreen |
07:53 |
|
berick joined #evergreen |
08:03 |
|
Shae joined #evergreen |
08:11 |
|
akilsdonk joined #evergreen |
08:24 |
|
mrpeters joined #evergreen |
08:44 |
jl- |
morning |
08:49 |
|
akilsdonk_ joined #evergreen |
08:52 |
jeff |
morning! |
08:52 |
|
akilsdonk joined #evergreen |
08:53 |
jeff |
shart290__: if you're connecting to the local machine, usually you'd use the hostname "localhost" |
08:56 |
|
akilsdonk_ joined #evergreen |
08:59 |
|
timlaptop joined #evergreen |
09:23 |
|
dluch joined #evergreen |
09:46 |
|
dkyle left #evergreen |
09:48 |
|
dkyle1 joined #evergreen |
09:49 |
|
denishpatel joined #evergreen |
09:50 |
|
dkyle1 left #evergreen |
10:05 |
gsams |
csharp: thanks, I'll keep that in mind if it happens again! |
10:34 |
jl- |
got about 4 GB RAM on an EG dev server (all in 1) but search still seems a little slow at times |
10:35 |
|
RoganH joined #evergreen |
10:40 |
* denishpatel |
waves hands for RoganH .. Hi! |
10:41 |
RoganH |
Good morning denishpatel ! |
10:41 |
denishpatel |
hey good morning |
10:41 |
denishpatel |
sorry, i haven't been catching up on IRC lately, but when I get a chance, I try to catch up |
10:42 |
|
shart290__ left #evergreen |
10:42 |
bshum |
denishpatel++ |
10:42 |
denishpatel |
thanks |
10:43 |
RoganH |
Always glad to see you around. I half pay attention. I'm doing training for staff on our SCLENDS upgrade today. |
10:43 |
denishpatel |
let me know, if I can help review anything related to databases or performance |
10:43 |
denishpatel |
RoganH: no problem. |
10:43 |
bshum |
jl-: Define "a little slow". Also, the number of records in your system, or a whole slew of other factors including just warming up the database cache, could affect general search performance. |
10:44 |
|
shart290__ joined #evergreen |
10:45 |
shart290__ |
tried localhost in the server field and I get an error. |
10:46 |
shart290__ |
method=open-ils.auth.authenticate.init |
10:46 |
shart290__ |
params=["evergreen"] |
10:46 |
shart290__ |
THROWN: |
10:46 |
shart290__ |
Network Failure: status = <unknown> |
10:46 |
shart290__ |
service=open-ils.auth&method=open-ils.auth.authenticate.init&param=%22evergreen%22 |
10:46 |
shart290__ |
STATUS: |
10:46 |
shart290__ |
<unknown> |
10:46 |
shart290__ |
I won't have time to troubleshoot this til this afternoon but I am curious as to whether this is another configuration issue. |
10:46 |
|
mrpeters left #evergreen |
10:49 |
jl- |
bshum: it's probably still warming the cache then .. right now we have just about 500k records |
10:50 |
|
kbeswick joined #evergreen |
10:51 |
bshum |
jl-: I don't have exact math or suggestions on ways of warming the cache for search. Our systems tend to have enough RAM to keep most/all of the DB size in RAM to speed things up and avoid disk I/O. Our horrible way of throwing hardware at the problem as it were. :\ |
10:52 |
jl- |
how do I use off? [off] ? |
10:53 |
jl- |
I'm also running ./authority_control_fields.pl right now, so I'm sure it adds some load |
10:54 |
denishpatel |
bshum: RE: search performance, is it worth investing time to implement Postgres full-text search instead of ILIKE queries? |
10:54 |
eeevil |
denishpatel: evergreen uses FTS, not ILIKE |
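(For context on eeevil's point: a minimal, generic PostgreSQL sketch contrasting the two approaches. The table and column names below are placeholders, not Evergreen's real search schema, which lives mostly under metabib.* and is considerably more involved.)

    -- Substring matching with ILIKE generally forces a sequential scan:
    CREATE TABLE demo_bib (id BIGSERIAL PRIMARY KEY, title TEXT);
    SELECT id FROM demo_bib WHERE title ILIKE '%star trek%';

    -- PostgreSQL full-text search matches a tsvector against a tsquery and
    -- can be backed by a GIN index:
    CREATE INDEX demo_bib_title_fts
        ON demo_bib USING GIN (to_tsvector('english', title));
    SELECT id FROM demo_bib
     WHERE to_tsvector('english', title) @@ plainto_tsquery('english', 'star trek');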
10:55 |
denishpatel |
that's interesting. looks like I should delve further into the evergreen schema. |
10:57 |
eeevil |
denishpatel: there is one place that uses LIKE (not ILIKE), but the queries are left-anchored, and we recommend using either the C locale or text_pattern_ops |
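(A generic PostgreSQL illustration of that recommendation; the table and column names are placeholders. Under a non-C locale a plain btree index cannot service LIKE prefix queries, but a text_pattern_ops index can.)

    CREATE TABLE demo_items (id BIGSERIAL PRIMARY KEY, label TEXT);
    -- text_pattern_ops builds a btree usable by left-anchored LIKE regardless
    -- of the database locale:
    CREATE INDEX demo_items_label_tpo ON demo_items (label text_pattern_ops);
    -- This prefix query can use the index; an unanchored '%...%' pattern cannot:
    SELECT id FROM demo_items WHERE label LIKE 'PS3545%';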
10:59 |
denishpatel |
eeevil: thanks for details |
10:59 |
bshum |
jl-: You know, I misread what you said earlier. 500k bibs sounds like a good amount (it occurs to me that our system is only around 1.1 million actives). 4 GB of RAM might not be covering that so well and it's slow for hitting disk. |
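(One way to sanity-check whether 4 GB covers the working set is to compare the database's on-disk size against available RAM; this is plain PostgreSQL, nothing Evergreen-specific, run while connected to the Evergreen database.)

    -- Total on-disk size of the current database:
    SELECT pg_size_pretty(pg_database_size(current_database()));

    -- The largest tables with their indexes, roughly what you'd want cached:
    SELECT relname,
           pg_size_pretty(pg_total_relation_size(oid)) AS total_size
      FROM pg_class
     WHERE relkind = 'r'
     ORDER BY pg_total_relation_size(oid) DESC
     LIMIT 10;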
10:59 |
jl- |
has anyone thought of using apache solr? they have all kinds of search tweaking |
11:00 |
denishpatel |
eeevil: is there a way to download evergreen's postgres schema only to load into my test server? |
11:00 |
jeff |
jl-: someone did play with solr, but we never saw any code, and i don't think that it was at all integrated with evergreen, just searching an export of records from evergreen (but again -- never saw any code, so dunno) |
11:01 |
jl- |
vufind uses solr :) |
11:01 |
jeff |
shart290__: the error you pasted appears to be a timeout between the client and server -- the client (apparently) never received a response. |
11:01 |
jeff |
jl-: yep. |
11:01 |
denishpatel |
jl-: I would prefer ElasticSearch over Solr. In my experience, if you can get enough features out of Postgres FTS, you might not want to take on one more tool and the integration overhead that comes with it |
11:03 |
jl- |
solr seems self-reliant from my experience |
11:04 |
jl- |
I haven't used ES |
11:07 |
jcamins |
ES is rather awesome, in my experience. |
11:07 |
eeevil |
denishpatel: it's available in the tarball, of course, and in the git repo. IMO, evaluating the schema without the context of actual queries (and, even more so, the purpose of the queries, the shape of production data, and the specific application features that may or may not be in use for a given instance of a query type) may be of little productive use ... http://git.evergreen-ils.org/?p=Evergreen.git;a=tree;f=Open-ILS/src/sql/Pg;h=fe8800672252db39175a8561f11a0b4ff6a5137c;hb=master and see build-db.sh (and friends) |
11:08 |
denishpatel |
eeevil: right. I just wanted to skim through the schema definition, thanks |
11:08 |
eeevil |
denishpatel: http://docs.evergreen-ils.org/2.2/_creating_the_evergreen_database.html looks relatively modern, as well, and more high-level |
11:08 |
jcamins |
Of course, EG already has functioning search, so there's that. |
11:09 |
eeevil |
jcamins: and then there's the fun of keeping 2 things in sync. and, java. |
11:10 |
eeevil |
(for anything solr/lucene based) |
11:10 |
jcamins |
eeevil: that's what I like about ES. I don't actually have to worry about Java at all. |
11:11 |
jcamins |
It's written in Java, but as long as there's a Java binary on the system, I don't really have to think about that fact. |
11:13 |
eeevil |
that's good to hear. better than solr on the normalizer front, I assume? (as in, more of them pre-built) |
11:14 |
jcamins |
I'm not really using normalizers yet, but that's my understanding. I got fed up with Solr because I couldn't get it to work at all, though. |
11:15 |
jl- |
trying to wrap my head around organization units again: right now I have 1 consortium, 1 library, and 500k bib records; next is another library with 200k bib records to be imported. how can I assign these to a specific org unit so I can search within only 1 org? |
11:16 |
jl- |
I know how to create units but how do I assign certain bibs to that org |
11:17 |
eeevil |
jl-: bibs can be "owned" in (basically) one of two ways, and the way you use depends on whether it's an electronic resource or a barcoded, physical item |
11:17 |
eeevil |
for barcoded items, attach a call number and copy owned by the "owner" |
11:17 |
eeevil |
for e-resources, use a located URI (856 with an appropriate subfield 9) |
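(To make the barcoded case concrete, here is a hedged SQL sketch of what "a call number and copy owned by the owner" boils down to. In practice this is done through the staff client or migration tooling, and the exact required columns and defaults vary by Evergreen version; the bib ID 12345, org unit ID 2, staff user ID 1, call number label, and barcode below are all placeholders.)

    -- Create a volume (call number) on a bib, owned by org unit 2, then hang
    -- a barcoded copy off it that also circulates at org unit 2.
    WITH cn AS (
        INSERT INTO asset.call_number (creator, editor, record, owning_lib, label)
        VALUES (1, 1, 12345, 2, 'PS3545 .I345')
        RETURNING id
    )
    INSERT INTO asset.copy
           (creator, editor, call_number, circ_lib, barcode,
            loan_duration, fine_level)
    SELECT 1, 1, cn.id, 2, '33000001234567',
           2,  -- loan_duration: 1/2/3, 2 is the "normal" duration
           2   -- fine_level: 1/2/3, 2 is the "normal" fine level
      FROM cn;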
11:20 |
bshum |
jl-: Huh... so in your test system I can't find any copies. How did you make the bibs visible in search? (aka, did you just mark them as transcendent or something) |
11:24 |
bshum |
What I would normally expect in a migration is that the original MARC exported from the legacy ILS would have some sort of holdings tag containing basic info like barcode, call number, price, whatever. And then you would map those to callnumbers/copies in Evergreen, that linked to each given bib. |
11:25 |
dbwells |
jl-: If you are building an Evergreen instance from two (or more) catalogs that are currently independent (and it sounds to me like you are), you are going to want to combine and dedupe all the records from every system, preferably at an early point. This will not be a particularly straightforward task, and I doubt it's represented in the docs in any way. In fact, depending on how varied the records in your current catalogs are, it could be a monumental task by itself. |
11:25 |
bshum |
In Evergreen, bibs are all stored in biblio.record_entry, but there's the item holdings which are reflected in asset.call_number (which points back to a specific record ID), and then asset.copy (which points back at specific call number IDs) for each bib. |
11:25 |
bshum |
*bib/copy/barcoded item |
11:26 |
bshum |
So when eeevil refers to ownership of the volumes/copies, those are the contents in those other tables. Not bibs. |
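(Put as a query, the relationships bshum describes look roughly like this; bib ID 12345 is a placeholder, and exact columns differ a bit across Evergreen versions.)

    -- List each call number and barcoded copy hanging off one bib, with the
    -- library that owns the volume and the library the copy circulates at.
    SELECT bre.id         AS bib_id,
           acn.label      AS call_number,
           own.shortname  AS owning_lib,
           acp.barcode,
           circ.shortname AS circ_lib
      FROM biblio.record_entry bre
      JOIN asset.call_number acn  ON acn.record = bre.id      AND NOT acn.deleted
      JOIN asset.copy        acp  ON acp.call_number = acn.id AND NOT acp.deleted
      JOIN actor.org_unit    own  ON own.id  = acn.owning_lib
      JOIN actor.org_unit    circ ON circ.id = acp.circ_lib
     WHERE bre.id = 12345;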
11:29 |
jl- |
right now, I don't have any items barcoded yet (I've been dreading the task) and I figured I'd get more records from other orgs first; I have them all set to source=3 to make them all visible |
11:29 |
jl- |
so no copies for now |
11:29 |
jl- |
I do have a copies list tho |
11:31 |
jl- |
not sure what to do first, assign to org or do copies |
11:32 |
bshum |
For fun, diagram of what I just said describing how bib/volume/copy info is organized in Evergreen: http://ur1.ca/h4zj2 |
11:33 |
jl- |
here's a sample of the copies list |
11:33 |
jl- |
http://paste.debian.net/hidden/cb638e90/ |
11:33 |
jl- |
the number in there seems to match the number of bibs records |
11:33 |
bshum |
Caveat, I don't know how to use Call numbers, ignore my poor cataloging senses. |
11:33 |
jl- |
I like the diagram |
11:33 |
bshum |
But this is just to illustrate the concept. |
11:34 |
bshum |
So, for ownership |
11:34 |
bshum |
Each call number and copy can be owned by a given org unit (library) |
11:34 |
jl- |
right |
11:34 |
bshum |
So let's say, the first record, where you've got two different volumes |
11:34 |
bshum |
And two copies for each volume |
11:35 |
bshum |
Those might be owned by two different libraries |
11:37 |
bshum |
So, really, before you bring in the next org, you'll probably want to figure out the copies. So that it's possible for a single bib to have holdings for different libraries. |
11:38 |
bshum |
And this goes back to what dbwells was saying about merging and de-duplicating of records. Because if both libraries have a record for the same Star Trek book, then it'd be good to merge them and have a single bib record, with different holdings for those libraries. |
11:38 |
bshum |
Otherwise, you'll have lots of duplicate records in searches that span libraries. |
11:38 |
bshum |
But that also depends on how the system is intended to be used / searched, I guess. |
11:40 |
bshum |
But let's say that having combined bibs saves you room in the database. |
11:42 |
bshum |
In any case, I would create all the libraries you think you're going to need as org-unit entries, just so that you get them sorted. And then as you bring in new bibs/copies, you just need to figure out how to correctly mark the volumes/copies as owned by a specific org unit. You could leave bib deduplication for later. |
11:42 |
jl- |
I understand, this seems very important but for now I think it would be overkill. If I can have two orgs with copies each, that would be fantastic |
11:42 |
jl- |
and if I have some duplicates, that's ok for testing the ILS |
11:43 |
jl- |
with this I mean deduplication |
11:43 |
jl- |
:) |
11:43 |
bshum |
yes, you can always create multiples of bibs, so basically like the "Star Trek" example, lots of times over. But you just set the ownership for each call number / copy accordingly. |
11:43 |
bshum |
And eventually poke at merging things. |
11:43 |
jl- |
anyway, so how do I actually attach copies, is there documentation? I have an items.txt with 400 entries |
11:43 |
jl- |
http://paste.debian.net/hidden/cb638e90/ |
11:43 |
bshum |
Just means a slightly messier system |
11:43 |
jl- |
here are the first 200 |
11:43 |
jl- |
400k entries |
11:43 |
jl- |
I meant |
11:44 |
bshum |
So now we're on the new topic of how to add copies and this is where I'll defer to other potential experts. Because I have limited first-hand experience on this. |
11:45 |
* bshum |
goes back to the corner |
11:45 |
jl- |
:( |
11:45 |
bshum |
But just eyeballing it |
11:45 |
bshum |
I'm wondering where the call number is |
11:46 |
bshum |
For a given copy |
11:46 |
jl- |
2nd or 3rd column |
11:46 |
* bshum |
turns around slightly to peer from afar |
11:46 |
bshum |
They're just numbers? |
11:46 |
bshum |
Or are those some sort of ID? |
11:46 |
jl- |
dunno :p |
11:46 |
* bshum |
assumed one of those columns is the bib record's ID |
11:47 |
jl- |
it's from a voyager export |
11:47 |
jl- |
how can electronic records be owned by an org? |
11:47 |
jl- |
right now, they are all set to source=3 so they are marked as electronic |
11:48 |
|
mcooper joined #evergreen |
11:51 |
bshum |
So, electronic records in Evergreen can also be "owned" by a specific org unit. |
11:51 |
jl- |
nice |
11:51 |
jl- |
how? |
11:52 |
bshum |
The trick is that the 856 entry is usually modified to include a new subfield 9 (that's particular to Evergreen) which has a value of the library's shortname |
11:52 |
bshum |
When the system sees that, it'll create a special volume that's owned by the library indicated by the shortname |
11:52 |
bshum |
And that'll be viewable in certain scoped searches that include that library. |
11:53 |
bshum |
(that's the surface level default behavior, there's more details in how that works) |
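(A rough way to verify that behavior after import is to look at the call numbers Evergreen generates for located URIs; to the best of my recollection they carry a special '##URI##' label, but treat that label and the exact columns as assumptions to confirm against your version.)

    -- Hedged sketch: list bibs that picked up a located-URI "volume" and the
    -- org unit it was created for. The '##URI##' label convention is assumed.
    SELECT acn.record    AS bib_id,
           aou.shortname AS owning_lib
      FROM asset.call_number acn
      JOIN actor.org_unit    aou ON aou.id = acn.owning_lib
     WHERE acn.label = '##URI##'
       AND NOT acn.deleted;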
11:53 |
jl- |
makes some sense, is this done before importing |
11:53 |
jl- |
or after? |
11:53 |
|
RoganH left #evergreen |
11:54 |
bshum |
jl-: If it was me, I'd use an external MARC editor to change the 856 to include the necessary $9 prior to import. |
11:54 |
jl- |
I can use sed |
11:54 |
bshum |
But I'm less confident in MARC, and tend to ask my catalogers to help me with things like that :) |
11:54 |
jl- |
I hate marc, and I have no catalogers |
11:54 |
jl- |
:P |
11:54 |
bshum |
Actually these days, they just do it without my interaction. So we all win? |
11:55 |
jl- |
nice, so you can go on vacation |
11:55 |
phasefx |
jl-: I think the MarcEdit program may let you do changes like that in batch, as well |
11:55 |
jl- |
phasefx: and that would be done prior to importing? |
11:55 |
phasefx |
right |
11:56 |
jl- |
any documentation on this? seems like a common need |
11:59 |
|
finnx joined #evergreen |
12:03 |
|
atlas__ joined #evergreen |
12:05 |
phasefx |
jl-: looks like the documentation abstracts it as "modify the records using your preferred MARC editing approach to ensure the 856 field contains..." :} |
12:06 |
jeff |
As a heads-up for anyone heavily filtering their list mail, I've called for a vote (as discussed at the last dev meeting) on bshum's proposal to be Release Manager for 2.7: http://georgialibraries.markmail.org/thread/iwrim6qgyp6ratvk |
12:21 |
bshum |
Okay, google doc diagram is prettier. I can go eat my lunch happy that it's got some color now. |
12:22 |
jeff |
heh |
12:38 |
|
mllewellyn joined #evergreen |
12:45 |
|
bmills joined #evergreen |
12:50 |
|
montgoc1 joined #evergreen |
12:54 |
|
RoganH joined #evergreen |
12:56 |
|
RoganH left #evergreen |
13:18 |
|
b_bonner joined #evergreen |
13:25 |
|
dkyle joined #evergreen |
13:26 |
|
Shae joined #evergreen |
13:26 |
csharp |
bshum++ # evergreener of the hour |
13:30 |
* jeff |
replies to his own thread |
13:30 |
* jl- |
staring into the abyss with marcedit |
13:30 |
jeff |
(had to give it some time :-) |
13:31 |
jl- |
Field 856: 37425 occurrences. Found in 29878 records. |
13:31 |
jl- |
does this mean those are marked as electronic ? |
13:32 |
jeff |
no. |
13:32 |
jeff |
just that they have links. |
13:32 |
|
hbrennan joined #evergreen |
13:36 |
jeff |
in Evergreen, "located URI" logic is triggered when 856 ind1 is 4 and ind2 is 0 or 1, and there is a subfield 9 present with a string value that corresponds with an org unit's "shortname". see http://docs.evergreen-ils.org/dev/_migrating_from_a_legacy_system.html#_making_electronic_resources_visible_in_the_catalog |
13:37 |
jeff |
so you might have records with 856 tags that point to things like "author info" at loc.gov, etc. those aren't treated specially (assuming they don't have those values for ind1/ind2 and an appropriate subfield 9) |
13:37 |
jl- |
jeff: I want to create a subfield 9 in 856 and give it a shortname which I'll use for an org unit |
13:40 |
jl- |
marcedit will let me add any field and field data |
13:41 |
jeff |
i would recommend starting with a single test record. |
13:42 |
jeff |
ensure that you're getting the behavior you want before doing any kind of batch/mass updates. |
13:50 |
jl- |
yuo |
13:50 |
jl- |
yup |
13:51 |
jl- |
bshum: it seems like I can batch edit (and add) fields to records in the staff client, so adding an org shortname would not have to happen prior to importing? |
13:52 |
bshum |
jl-: If you do not have the corresponding org unit, I would expect it to fail to create anything, forcing you to have to reingest (aka, hard refresh/reload) the bibs you import that don't have matching org unit entries. |
13:52 |
bshum |
It'll add the MARC record, but won't create an electronic record entry. |
13:53 |
bshum |
And then when you do add the org unit with the matching shortname, the existing record will not update itself, requiring forced manual updating of all bibs. |
13:53 |
bshum |
Might as well set the org units first and do it right from the get go. |
13:53 |
jl- |
agreed, I will create the org unit first |
13:53 |
jl- |
and then use the batch edit to add in the extra field (and subfield) with the shortname ? |
14:26 |
|
ktomita joined #evergreen |
14:31 |
|
ktomita_ joined #evergreen |
14:53 |
jeff |
csharp++ |
14:53 |
jeff |
gpls++ |
14:53 |
jeff |
awitter++ |
14:53 |
bshum |
csharp++ awitter++ gpls++ |
14:59 |
hbrennan |
When I see karma points being handed out sans IRC conversation, and with no new emails... I always have a feeling I'm missing out on some news |
15:01 |
bshum |
hbrennan: It just means that the shadow council of thirteen has just passed judgement. |
15:01 |
hbrennan |
Aw shucks, I knew there was a secret society |
15:02 |
hbrennan |
phasefx++ Banshee wail.. haha. |
15:02 |
hbrennan |
Well there. I just received my first @later |
15:02 |
phasefx |
there was cow bell too. can't have too much cow bell |
15:02 |
hbrennan |
phasefx: Never |
15:10 |
csharp |
@who is in charge of the Elders of the Internet? |
15:10 |
pinesol_green |
Sato`kun is in charge of the Elders of the Internet. |
15:16 |
jeff |
not to pick on Sato`kun, but leave it to pinesol_green to answer a vaguely mysterious/conspiracy-like question with a nickname that hasn't recently been active and is connected from a server in Lithuania. :-) |
15:17 |
jeff |
(none of those things are really that unusual or noteworthy, i'm just being silly) |
15:17 |
hbrennan |
I, too, found that interesting |
15:17 |
bshum |
@who stole a cookie from the cookie jar |
15:17 |
pinesol_green |
artunit stole a cookie from the cookie jar. |
15:23 |
gsams |
bshum++ #shadow council of thirteen |
15:32 |
dbwells |
bshum jeff: so what was the real reason for the ++s, or are you just going to leave us in the dark? |
15:33 |
jeff |
oh, sorry. i thought someone explained it. |
15:33 |
bshum |
Working on things for the community web server. |
15:34 |
jeff |
GPLS has taken some spare hardware and finished initial provisioning as a community VM host, so that things like the git and web servers can be on hardware that more than just csharp can restart, potentially other things like test servers/buildbots, etc. |
15:34 |
hbrennan |
Haha. Glad I wasn't the only curious one |
15:35 |
jeff |
A step toward csharp being able to go on vacation. :-) |
15:35 |
dbwells |
jeff: Is this what csharp and a few others of us were chatting about at the end of the conference? |
15:35 |
jeff |
yep! |
15:36 |
dbwells |
thanks |
15:36 |
eeevil |
for the Shadow Council: 2.4.7 is in the previews dir, I'll be announcing it soon on-list |
15:37 |
jl- |
hm so I can add 856 and subfield 9 in marc edit (staff client) -- where do I put the org shortname? |
15:37 |
dbwells |
Also, while I'm here, in case anyone missed the quiet, IRC-only announcement at 10:30pm last night, rel_2_6 is branched now. I'll do my best to send a "close out" email during the week, but we should consider 2.6.0 as released. |
15:38 |
bshum |
dbwells++ |
15:39 |
jeff |
in addition to csharp and andy at GPLS, gmcharlt, bshum, and I are starting the ball rolling. Nothing very formal at the moment, just getting our feet wet. |
15:39 |
jeff |
(Some of us are new at this whole cabal thing.) |
15:39 |
jeff |
dbwells++ hooray! :-) |
15:41 |
eeevil |
dbwells++ # shaming me into action since 2010! ;) |
15:42 |
gmcharlt |
dbwells++ |
16:17 |
jl- |
how do I limit my search scope to only show results from a specific org unit? |
16:17 |
jl- |
I gave one record a datafield 856 sub 9 SHIP and I have an organization unit called SHIP |
16:22 |
jeff |
jl-: since Shippensburg University is org unit id 2, you'd add the parameter locg=2 to your query URL -- or just select that org unit from the Library: input in a search form. |
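(If you need the numeric ID for a given shortname, to build a locg= URL by hand, a simple lookup works; 'SHIP' is the shortname from jl-'s example.)

    -- Map an org unit shortname to its ID, with its position in the org tree:
    SELECT id, shortname, name, parent_ou
      FROM actor.org_unit
     WHERE shortname = 'SHIP';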
16:27 |
jl- |
jeff: true, it is locg=2, but it's also showing the records from the consortium, not only locg=2 |
16:27 |
jl- |
http://evergreendev.klnpa.org/eg/opac/results?query=henry+crabb&qtype=author&fi%3Aitem_type=&locg=2&sort= |
16:29 |
eeevil |
jl-: that's how located URIs work -- they model licensing of electronic materials. however, there's a switch in 2.6+ that makes them act exactly like copies: https://bugs.launchpad.net/evergreen/+bug/1271630 |
16:29 |
jl- |
I'll be back tomorrow, thanks for the help :)) |
16:29 |
pinesol_green |
Launchpad bug 1271630 in Evergreen "Allow Located URIs to supply copy-like visibility to bibs" (affected: 1, heat: 6) [Wishlist,Fix committed] |
16:30 |
jl- |
eeevil: in my case they are all electronic and I need some kind of search scope that restricts it to one of the libraries |
16:31 |
eeevil |
jl-: well, if the source is transcendent, that's the ballgame. if you've added located URIs you can null the source and enable the opac.located_uri.act_as_copy global flag |
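(A hedged sketch of the two steps eeevil describes. The flag name comes straight from the conversation; source = 3 is jl-'s own test setup, and an update on biblio.record_entry may fire reingest triggers, so it can take a while on a large batch.)

    -- 1. Make located URIs behave like copies for search visibility:
    UPDATE config.global_flag
       SET enabled = TRUE
     WHERE name = 'opac.located_uri.act_as_copy';

    -- 2. Clear the transcendent bib source (source = 3 in jl-'s test system)
    --    so visibility comes from the located URIs instead:
    UPDATE biblio.record_entry
       SET source = NULL
     WHERE source = 3
       AND NOT deleted;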
16:32 |
jl- |
eeevil: thanks Ill have to look into that tomorrow |
16:32 |
jl- |
good afternoon |
16:32 |
eeevil |
good night, and good luck |
16:32 |
eeevil |
;) |
16:34 |
jeff |
jl-: of the four records i see in your Henry Crabb search above, only one (30763) has an 856 tag, and that tag is incomplete (no indicators, empty-looking subfield 9) |
16:35 |
jeff |
jl-: and since none of the records appear to have visible copies, I'm assuming that their bib source is transcendent, which as eeevil noted is "the ballgame" :-) |
16:36 |
jeff |
at this point, i'm not sure we have enough information to recommend for/against the use of opac.located_uri.act_as_copy |
16:36 |
jeff |
(in jl-'s situation) |
16:37 |
eeevil |
jeff: true. if these are bibs that correspond to physical items on shelves in a library, the best thing to do would be to get that data from the current system and attach the items. minimally, that would be the call number label and the barcode. all else is of little import for a surface test of the system |
16:38 |
eeevil |
jl-: -^ |
17:13 |
pinesol_green |
Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html <http://testing.evergreen-ils.org/~live/test.html> |
17:25 |
|
atlas___ joined #evergreen |
17:53 |
|
RBecker joined #evergreen |
18:06 |
|
alynn26 joined #evergreen |
19:02 |
|
dcook joined #evergreen |
19:18 |
|
j_scott joined #evergreen |
19:19 |
|
j_scott joined #evergreen |
19:21 |
|
j_scott left #evergreen |
20:11 |
|
fparks joined #evergreen |