Time | Nick | Message |
07:16 |
|
TARA joined #evergreen |
07:33 |
|
agoben joined #evergreen |
07:55 |
|
krvmga joined #evergreen |
07:55 |
|
kmlussier joined #evergreen |
08:01 |
JBoyer |
kmlussier, to answer your question from yesterday, we have a few libraries that use the web client with chromebooks from time to time, but no one uses it heavily. |
08:08 |
|
Dyrcona joined #evergreen |
08:21 |
|
plux joined #evergreen |
08:21 |
Dyrcona |
krvmga: I forgot to mention in my email how to delete a remote git branch. Do you know how to do that, already? |
08:21 |
krvmga |
Dyrcona: yes, i've had to do it before in the MassLNC.git :) |
08:21 |
Dyrcona |
OK. Good. |
08:22 |
krvmga |
kmlussier walked me through it. |
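[Editor's note] A minimal sketch of the remote-branch deletion being discussed; the remote and branch names below are placeholders, not branches from this log:

    # modern syntax
    git push origin --delete user/example/some_old_branch
    # equivalent older syntax: push "nothing" to the remote branch
    git push origin :user/example/some_old_branch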
08:22 |
|
jeff joined #evergreen |
08:22 |
Dyrcona |
I just deleted the two that I made yesterday. I'm going to rebase 'em on different branches. |
08:22 |
krvmga |
did you delete that 2.10 one i pushed? |
08:22 |
Dyrcona |
Think I'll rebase the alt_patron_summary branch on master and on rel_2_10. |
08:23 |
Dyrcona |
I can't delete your branches, though I can change that. :) |
08:23 |
krvmga |
LOL, it's okay. i'll take care of it in a few minutes. |
08:23 |
* krvmga |
is enjoying learning. |
08:23 |
Dyrcona |
phasefx++ for pushing alt_patron_summary_rel_2_10_5. |
08:24 |
Dyrcona |
Same here. I learned some interesting things doing the rebases yesterday, so time was not wasted. |
08:25 |
Dyrcona |
I'll push the alt_patron_summary branches to the community working repo as well as to our own where I'm preparing for the upgrade. |
08:25 |
Dyrcona |
That's in case anyone wants to see them. I believe we're the only ones using it, and hopefully for not too much longer. Maybe 1 more year? |
08:44 |
Dyrcona |
hmm... That didn't work as well as I'd hoped. I think I'll have to cherry-pick... |
08:48 |
|
mmorgan joined #evergreen |
08:56 |
|
rjackson_isl joined #evergreen |
09:03 |
* dbs |
reads http://blog.mvlcstaff.org/2013/05/what-has-been-going-on.html and wonders what mvlc's apache startservers, minspareservers etc are set to now |
09:06 |
dbs |
as well as how much memory is being given to memcached these days |
09:08 |
* dbs |
is now trying to figure out why a load of /eg/opac/advanced -- just the basic template -- is taking ~15 seconds, before it even kicks off any JS requests |
09:10 |
dbs |
top isn't turning up any persistently high-CPU processes |
09:12 |
jeff |
tried turning on DEBUG_TIMING in OpenILS::WWW::EGCatLoader? |
09:12 |
jeff |
checking for multi-second db queries? |
09:13 |
jeff |
taking that long when logged in, logged out, or both? |
09:15 |
dbs |
jeff: I did that on day one, but will want to tweak the template footer to hide the data from mere mortals if I turn it on again |
09:15 |
dbs |
and that's logged out, which is highly disturbing |
09:16 |
dbs |
memcached stats shows ~3900 evictions, which seems reasonable |
09:20 |
jeff |
we have zero, but that doesn't mean that memcached is your issue. |
09:20 |
dbs |
I would expect anything which should be absolutely static like /eg/opac/advanced with no query args to respond in sub-second times; when things are going well here the response time for the advanced page drops to 3 seconds, but that's as good as I've seen |
09:20 |
jeff |
can your apache hosts connect to memcached successfully? |
09:20 |
dbs |
jeff: yeah, I realize that -- so many moving parts to tweak |
09:21 |
jeff |
any cache-related errors in the logs? |
09:21 |
dbs |
tested the apache memcache translator setting of 127.0.0.1:11211 via telnet and that works, no errors reported in the logs |
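[Editor's note] A hedged sketch of pulling the same memcached stats non-interactively (netcat flags vary by flavor; memcached-tool, which ships with memcached, is an alternative):

    echo stats | nc -q1 127.0.0.1 11211 | egrep 'curr_connections|limit_maxbytes|evictions|get_hits|get_misses'
    # or: memcached-tool 127.0.0.1:11211 stats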
09:21 |
jeff |
there are things that are fetched from config, from the db, and from memcached... also the template toolkit cache... |
09:21 |
dbs |
yep |
09:22 |
* dbs |
will get back onto debug_timing, whoever thought of adding that was a GENIUS |
09:23 |
dbs |
ooh, hit < 2 seconds for that attempt! |
09:23 |
jeff |
heh |
09:24 |
dbs |
and then back to 4.5. I just feel for our circ staff and cataloguers who are having to deal with this :/ |
09:25 |
jeff |
interesting. i see no large performance hit when i load /eg/opac/advanced on an apache host that is prevented from connecting to memcached (connection attempts are dropped, not rejected) |
09:26 |
jeff |
oh, and on something you said above: i think your opensrf config files are going to affect memcached usage in tpac, and the OSRFTranslatorCacheServer is going to be exclusive to the translator endpoint. |
09:28 |
jeff |
do you have slow query logging enabled on postgres, or can you enable it? alternately, if you hit 4-15 seconds you might even catch one with polling the pg_stat_activity by hand |
09:29 |
dbs |
our hosts have slow query logging enabled, but I don't think I have direct access, so pg_stat_activity might be the way to go |
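[Editor's note] A minimal sketch of the hand-polling jeff suggests; column names are for PostgreSQL 9.2+ (older releases use procpid/current_query instead of pid/state/query), and the 2-second threshold is illustrative:

    SELECT pid, now() - query_start AS runtime, state, left(query, 100) AS query
      FROM pg_stat_activity
     WHERE state <> 'idle'
       AND now() - query_start > interval '2 seconds'
     ORDER BY runtime DESC;

For the slow query log itself, log_min_duration_statement (milliseconds) in postgresql.conf is the usual knob.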
09:29 |
|
yboston joined #evergreen |
09:35 |
jeff |
not that a production system is often truly quiescent, but does this page load slowly during "off hours" also? |
09:37 |
dbs |
holy crow. after reloading apache2, my first connection showed initial connection of 12.7 seconds, ssl of 12.7 seconds, and then 34 seconds of waiting |
09:39 |
dbs |
It seems that the apache processes need to be initialized, subsequent requests don't show those kinds of slowdowns (157ms and 83.5ms for initial connection & ssl), so that's an outlier |
09:40 |
Dyrcona |
dbs: yes, I've seen that Apache is really slow just after start up. Load also tends to stay high for a couple of minutes, then settles down. |
09:40 |
dbs |
hah, there aren't many events in the stock /eg/opac/advanced page it seems, just New page and initial load (At 0.1836: Initial load) |
09:40 |
csharp |
same here - just saw that on my test box |
09:40 |
dbs |
btw, Dyrcona, thank you for http://blog.mvlcstaff.org/2013/05/what-has-been-going-on.html |
09:43 |
jeff |
worst case i can get is 2.30s Load time (per chrome devtools) on a fresh apache start, even with cstore intentionally hobbled to 1 worker. |
09:43 |
jeff |
on a fresh apache start. |
09:43 |
jeff |
and that's the full page load, including javascript. |
09:44 |
jeff |
hrm. |
09:44 |
dbs |
hrm indeed! |
09:46 |
|
maryj joined #evergreen |
09:46 |
Dyrcona |
Possibly differences in hardware and more likely configuration. |
09:46 |
dbs |
yeah, our dev environment took 4 seconds from apache2 restart |
09:47 |
Dyrcona |
Our production servers were pretty much always busy so I chalked it up to everyone hitting it at once. |
09:47 |
Dyrcona |
yeah, it's faster when no one is trying to use it. :) |
09:47 |
dbs |
and it's very similarly configured (same hardware, but the configs for memcached / apache have drifted since I started trying to tune this beast) |
09:47 |
dbs |
Dyrcona++ |
09:48 |
Dyrcona |
I generally use a stock configuration on development vms. I can't be bothered to tweak it for one or two users. |
09:48 |
jeff |
dbs: how long does it take to curl something like /IDL2js ? |
09:49 |
jeff |
(to remove a few things like template toolkit from the mix, but still be poking at apache and mod_perl) |
09:49 |
dbs |
1.8 seconds |
09:49 |
dbs |
jeez |
09:50 |
dbs |
our staff clients must love parsing all of that |
09:51 |
Dyrcona |
Better the client than I. ;) |
09:52 |
dbs |
from the same server, that drops to 0.4 seconds usually, occasionally 0.05 seconds |
09:52 |
jeff |
adjust -n for however hard you want to hit the server, but i'd be interested in seeing what something like this shows: ab -c1 -n100 http://example.org/IDL2js |
09:52 |
jeff |
(that's 1 concurrent connection, 100 requests) |
09:55 |
|
TARA joined #evergreen |
09:57 |
|
mdriscoll joined #evergreen |
10:04 |
|
jvwoolf1 joined #evergreen |
10:05 |
* dbs |
was just diffing our entire branch against rel_2_10 looking for clearly questionable diffs, nothing popped up |
10:06 |
dbs |
jeff: hmm. 100% of the requests were 164ms or less |
10:08 |
* dbs |
tries again with https instead |
10:09 |
dbs |
as that might have just reflected the speed at which redirects from http to https were returned? |
10:11 |
JBoyer |
dbs, that's most likely it. I tried running ab against our bare URL and thought that consistent single-digit millisecond times were incredible. It is very quick to return a 301. :/ |
10:11 |
dbs |
that's more like it, a mean time of 1328ms |
10:11 |
dbs |
JBoyer++ |
10:11 |
dbs |
longest being 2157ms |
10:11 |
JBoyer |
Although it is informative that your redirects to https are taking 100ms+ |
10:12 |
dbs |
indeed |
10:13 |
JBoyer |
Though that may depend on where the redirect is in your eg_vhost.conf. We're not using it so I don't know how long a redirect takes normally. (the / -> eg/opac/home redirect is near the top.) |
10:16 |
dbs |
JBoyer: right at the bottom, it's the stock "if not https, then redirect". good thought |
10:17 |
dbs |
oh, and transfer rate was "275.39 [Kbytes/sec] received" running from my laptop over wifi |
10:19 |
dbs |
I assume when I see "Apache2::RequestIO::print: (32) Broken pipe at /usr/lib/perl5/Template.pm line 180" that's because bingbot or baidu or whatever UA just dropped a connection |
10:21 |
JBoyer |
Wow. Ok, I enabled that http -> https redirect at the bottom of eg_vhost.conf and it takes at most 1 ms on some older leftover hardware. |
10:22 |
dbs |
fantastic |
10:22 |
dbs |
we do have about 20 virtualhosts configured for both 443 and 80, so undoubtedly lots of parsing overhead |
10:23 |
JBoyer |
Could be, this machine hosts nothing else and it's generally hit by outside users. |
10:24 |
JBoyer |
NOT generally hit by users, that is. |
10:24 |
Dyrcona |
So, you're testing against one of 20 virtual hosts on the same hardware? Could be the hardware is slightly overtaxed. |
10:26 |
dbs |
Dyrcona: yep |
10:26 |
dbs |
although load seems fine |
10:26 |
dbs |
at 0.75 for the last fifteen minutes, on an "8 core" VM |
10:30 |
JBoyer |
dbs, referring to your speed comment above, I'm seeing ~200Kb/s for our homepage on the same gigabit network. TCP startup speeds I'm guessing? With pages that "small" the speed may never be able to top out. (I also see 2000Kb/s when testing localhost on its public IP) |
10:30 |
dbs |
total of 14 cstore timeout errors reported over the past 24 hours (determined by grep "perl:error" /var/log/syslog | grep cstore) |
10:33 |
jeff |
also laptop-on-wifi here. local system gets me 111.196 ms mean over 100 requests (11.120 seconds total); webby gets 740.887 ms mean over 100 requests (74.089 seconds total) |
10:34 |
jeff |
that seems to be more than just rtt hurting me there. second guess is ssl tuning, but in both cases ab selected TLSv1.2,ECDHE-RSA-AES256-SHA384,2048,256 |
10:35 |
dbs |
yep, same tls selection here |
10:35 |
jeff |
going over http to webby is 106.607 ms mean, 10.661 total. |
10:36 |
dbs |
there's a handful of cstore complaints about Jabber not having anyone at open-ils.actor to send results to |
10:36 |
jeff |
anyway, what was the breakdown for you in terms of time spent in the connect/processing/waiting stages? |
10:36 |
dbs |
7 in the last 24 hours |
10:37 |
jeff |
oh, in webby's http case it's redirects. |
10:37 |
dbs |
medians were 231/996/80 respectively |
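[Editor's note] For a single-request version of that connect/processing/waiting breakdown, curl's --write-out timers give roughly the same picture (the URL is a placeholder):

    curl -so /dev/null -w 'dns %{time_namelookup}s  connect %{time_connect}s  tls %{time_appconnect}s  ttfb %{time_starttransfer}s  total %{time_total}s\n' https://example.org/eg/opac/advanced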
10:37 |
jeff |
so it's rtt and server side processing. |
10:37 |
jeff |
(for me in webby's case) |
10:37 |
jeff |
(not making a call on yours) |
10:37 |
dbs |
well probably though :) |
10:41 |
dbs |
y'all are the best, btw |
10:45 |
dbs |
hmm, 11GB of memory, "free" shows 11G used, 290m free, 272m cached. no swap |
10:48 |
dbs |
top apache processes are using around 200m each, i've set them to serve a max of 5000 requests which seemed like a compromise between the doc's recommendations of 10000 and Dyrcona's findings that 1000 was better for mvlc; maybe I should move closer to 1000 |
10:49 |
jeff |
lower value will likely decrease memory pressure and increase the number of instances of "that page load took a while" |
10:50 |
dbs |
yeah, that's why I was hedging for a middle ground |
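[Editor's note] A sketch of the prefork knobs under discussion, with placeholder values rather than recommendations (Apache 2.4 renames MaxClients to MaxRequestWorkers and MaxRequestsPerChild to MaxConnectionsPerChild):

    <IfModule mpm_prefork_module>
        StartServers          5
        MinSpareServers       5
        MaxSpareServers      10
        MaxClients           75
        MaxRequestsPerChild 2000
    </IfModule>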
10:54 |
jeff |
do you have many instances of "No children available, waiting..." in logs? |
10:56 |
jeff |
or many matches on "Message processing duration [^0]" |
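[Editor's note] A sketch of those two checks, assuming OpenSRF is logging to syslog as in the grep dbs ran earlier:

    grep -c 'No children available, waiting' /var/log/syslog
    grep -c 'Message processing duration [^0]' /var/log/syslog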
11:03 |
|
jihpringle joined #evergreen |
11:05 |
dbs |
oh hey that's neat, memcached apparently just got killed off. Speaking of memory pressure. |
11:13 |
dbs |
to get back to you, jeff, now that I've reduced the number of maxspareservers and restarting everything in hopes of avoiding _that_ nasty situation again... |
11:14 |
dbs |
we did have some instances of that back when we went live and inadvertently left cstore at 15 instead of 45 |
11:15 |
dbs |
but checking more recently, opensrf.settings and open-ils.supercat have both run into that situation in the past day. Good lead! |
11:17 |
csharp |
dbs: during our post-upgrade craziness we upped our memcache connection limit to 4096, which kept that particular symptom from happening |
11:18 |
jeff |
if you're running memcached on the same instance as apache, it's going to by default be a pretty big target for the OOM killer. |
11:18 |
jeff |
there are some settings you can look into that may protect it a bit more, or you can run memcached on a different instance. |
11:19 |
dbs |
yeah, single server instance here |
11:19 |
jeff |
(we don't do the former, so i'm not sure how effective those will be) |
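[Editor's note] A hedged sketch of the OOM protection jeff alludes to, for a single-box install; the unit name and score are assumptions, not tested values:

    # systemd drop-in, e.g. /etc/systemd/system/memcached.service.d/oom.conf
    [Service]
    OOMScoreAdjust=-500

    # or, for an already-running process:
    echo -500 > /proc/$(pidof memcached)/oom_score_adj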
11:19 |
dbs |
csharp: huh, the number of connections was resulting in memcached getting killed off? |
11:20 |
dbs |
we have the option of spreading out on to a second server though, so maybe we'll fast-track that if we can't find a reasonable configuration |
11:21 |
csharp |
dbs: iirc, yeah |
11:21 |
csharp |
and we have two (non-redundant) memcache servers |
11:23 |
dbs |
when I ran the memcached stats earlier today, # of connections was ~150. That would be a big spike to hit 1024, but maybe |
11:23 |
dbs |
definitely good to know about! |
11:25 |
csharp |
what we saw was the DB connections filling up, which caused cstore to spawn multiple processes and memcached connections. In fact, memcached connections hitting the limit was the first symptom we saw. |
11:25 |
dbs |
jeff++ JBoyer++ csharp++ Dyrcona++ # thanks for the (technical, but more importantly moral) support |
11:25 |
csharp |
@coffee dbs |
11:25 |
* pinesol_green |
brews and pours a cup of Kenya Peaberry Ruera Estate, and sends it sliding down the bar to dbs |
11:25 |
csharp |
@praise dbs |
11:25 |
* pinesol_green |
dbs is one of the few who deserves to be praised |
11:26 |
* dbs |
looks at calendar -- oh hey I have to get into the office and cover reference! how time flies when you're adminning systems |
11:26 |
JBoyer |
Something to consider might be the interaction between the max number of apache processes and the max number of cstore/pcrud/etc. opensrf children. If you have a cstore max of 45 but an apache max of 150, you could have a lot of apache processes twiddling their thumbs. |
11:28 |
Dyrcona |
Yeah, MVLC bumped cstore max to 100 or so at migration and it may be higher now. We found that our biggest library doing their pull list used 45-50 cstore drones. |
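[Editor's note] A hedged sketch of where those cstore drone limits live; the element layout is from memory of a stock opensrf.xml, so verify against your own install:

    <open-ils.cstore>
        ...
        <unix_config>
            <min_children>10</min_children>
            <max_children>100</max_children>
            <max_requests>1000</max_requests>
        </unix_config>
    </open-ils.cstore>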
11:33 |
|
rlefaive joined #evergreen |
11:42 |
mmorgan |
I just discovered lp 1594937. This sounds hugely problematic. For those on 2.10, are you finding it so? |
11:42 |
pinesol_green |
Launchpad bug 1594937 in Evergreen 2.10 "Closed Dates Editor Displaying Incorrect Closed Duration" [Undecided,Confirmed] https://launchpad.net/bugs/1594937 |
11:43 |
Dyrcona |
Oh, neat. I confirmed it.... |
11:43 |
|
bmills joined #evergreen |
11:44 |
mmorgan |
:) |
11:45 |
|
kmlussier joined #evergreen |
11:45 |
jihpringle |
mmorgan: it's super confusing for our libraries |
11:46 |
mmorgan |
Is it a display issue in the Closed dates editor, or is it actually making the closing for the two days? |
11:46 |
jihpringle |
it's a display issue, the closed date is correct in the database |
11:46 |
Dyrcona |
I think it is only a display issue. |
11:47 |
* csharp |
cranks up git blame and goes lookin' for the cause |
11:47 |
mmorgan |
Ah. OK, just hugely confusing, then. |
11:47 |
jihpringle |
we're anticipating a lot of issues in December/January when libraries go to enter their closed dates for the year |
11:48 |
bmills |
p[[[[\]]]]]]]]]]]]]]]]]]]]]]]]]]]]] |
11:48 |
csharp |
1 | 4 | 2016-09-08 00:00:00.164-04 | 2016-09-08 23:59:59.164-04 | |
11:48 |
csharp |
it's definitely a display thing^^ |
11:53 |
bmills |
sorry, cat on keys there for a second |
11:53 |
kmlussier |
@blame cats |
11:53 |
pinesol_green |
kmlussier: cats stole bradl's tux doll! |
11:54 |
mmorgan |
bmills: Gesundheit! |
11:56 |
mmorgan |
jihpringle: I would anticipate a lot of confusion among our users as well. |
11:56 |
* mmorgan |
goes to Launchpad to comment on the bug |
12:05 |
|
bmills joined #evergreen |
12:12 |
|
sandbergja joined #evergreen |
12:19 |
|
brahmina joined #evergreen |
12:19 |
dbs |
cats!! |
12:27 |
* dbs |
wonders if there's an Outreachy-type program that LITA could work with (or run, I guess) for getting people already in the library world, but interested in strengthening their library tech skills, involved with Evergreen and Koha (and indexdata and such) |
12:27 |
* dbs |
thinks kmlussier is 100% on oint |
12:27 |
dbs |
point |
12:33 |
|
TARA joined #evergreen |
12:55 |
|
jvwoolf1 joined #evergreen |
13:06 |
csharp |
@decide bug squashing or bug squishing |
13:06 |
pinesol_green |
csharp: go with bug squishing |
13:13 |
csharp |
ede7e789 |
13:13 |
pinesol_green |
csharp: [evergreen|Bill Erickson] webstaff: browser client: Remove closed dates editor XUL-y requirements - <http://git.evergreen-ils.org/?p=Evergreen.git;a=commit;h=ede7e78> |
13:14 |
csharp |
^^this is a candidate for why the date display might be working differently |
13:15 |
csharp |
does it work in the web client? |
13:16 |
csharp |
works the same way on webbby |
13:16 |
csharp |
webby, too |
13:18 |
Dyrcona |
So, the dates are displayed wrong in the web client? |
13:18 |
csharp |
yep |
13:18 |
Dyrcona |
I don't think I checked that. |
13:18 |
Dyrcona |
OK. |
13:18 |
csharp |
From 2016-09-09 at 00:00 through 2016-09-10 at 23:59 |
13:18 |
Dyrcona |
I throw that on my pile of things to look at but someone else will likely get there before me. :) |
13:39 |
|
TARA joined #evergreen |
13:44 |
* mmorgan |
noticed the days closed thing on webby first, then went back to xul and saw the same thing :-/ |
13:47 |
|
rlefaive joined #evergreen |
13:48 |
rlefaive |
Bib sources: can we just delete ones we’re not using anymore? (not attached to records, etc.) |
13:51 |
Dyrcona |
Yes, that should be OK. |
13:53 |
rlefaive |
DEFAULT nextval('config.bib_source_id_seq'::regclass) will be ok with gaps in the id sequence? |
13:53 |
rlefaive |
Thanks Dyrcona! |
13:57 |
dbs |
Yep, sequences work well for deleted rows, it's when you insert a row with an ID that's ahead of the sequence where it gets tripped up (until it passes by that value) |
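[Editor's note] A hedged illustration of dbs's point, using the table from rlefaive's question; the id value is illustrative:

    -- deleting rows just leaves gaps; nextval() keeps counting and nothing breaks
    DELETE FROM config.bib_source WHERE id = 42;

    -- trouble only starts if a row is inserted with an explicit id ahead of the sequence,
    -- because a later nextval() will eventually collide with it; the usual repair is:
    SELECT setval('config.bib_source_id_seq', (SELECT MAX(id) FROM config.bib_source));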
14:06 |
Dyrcona |
Yeah, you don't want to mess with the sequence. |
14:35 |
|
ssieb joined #evergreen |
14:37 |
ssieb |
I've imported a bunch of MARC records using the import option. They're in the queue and they kind of exist in the catalog. What is the next step to make them actually available? |
14:38 |
Dyrcona |
ssieb: Add copies. |
14:39 |
|
TARA joined #evergreen |
14:40 |
ssieb |
I was hoping that the import process would do that. Is there some way to make that happen? |
14:40 |
dbs |
ssieb: http://docs.evergreen-ils.org/2.10/_cataloging_2.html does have some info about batch importing, if that's what you were talking about |
14:41 |
dbs |
specifically http://docs.evergreen-ils.org/2.10/_import_item_attributes_2.html -- but we don't use that here, so I'm not much practical help |
14:42 |
Dyrcona |
ssieb: Evergreen won't automatically create records if the data is not already in the records being imported. |
14:42 |
Dyrcona |
oops. "create records" should be "create copies." |
14:42 |
Dyrcona |
you will have to get the information for copies into the file of records before hand. |
14:42 |
ssieb |
It is, but I think I messed up the import profile. |
14:43 |
ssieb |
Is there some way to delete all these records from the catalog? |
14:44 |
Dyrcona |
Completely? Not without wiping out the database and starting over. |
14:44 |
Dyrcona |
You can try a delete from biblio.record_entry; in the database. That will flag them as deleted. |
14:45 |
Dyrcona |
There are ways to completely delete the records, but it is dark, and ever-changing voodoo. <- That is not a joke. |
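[Editor's note] As Dyrcona says, the bare DELETE ends up flagging rows rather than removing them; the explicit form of that flagging looks like the sketch below. The WHERE clause is illustrative only -- scope it to the records actually imported (negative ids are reserved internal records in stock Evergreen):

    UPDATE biblio.record_entry SET deleted = TRUE WHERE id > 0;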
14:45 |
ssieb |
hmm, at this point it would probably be easier to recreate the database |
14:45 |
Dyrcona |
At the beginning, it usually is. :) |
14:54 |
dbs |
overlay? |
14:55 |
dbs |
oh yeah, if you're not in production yet, blow it away and start again :) |
14:55 |
ssieb |
Oh right, I suppose I could do the import again and overlay the missing info... |
14:56 |
Dyrcona |
yeah, overlay should work. |
14:57 |
ssieb |
I'm just a little confused why it says the records were all imported, but there's a queue action for importing records |
14:57 |
ssieb |
And what's the difference between records in queue and items in queue? |
14:57 |
Dyrcona |
Well, they're imported into a queue to be approved or have duplicates resolved. |
14:58 |
ssieb |
And once they're in the queue, what do I do with them? |
14:58 |
ssieb |
Because they already show up in searches. |
14:58 |
Dyrcona |
They show up in searches? In the staff client or the OPAC or both? |
14:59 |
Dyrcona |
The records are in if they show up in searches. The queue is mainly for review of matching records. |
14:59 |
ssieb |
Ah! Only in the staff client, not in the user interface. |
15:00 |
ssieb |
Ok, so how do I get them out of the queue? |
15:00 |
|
TARA joined #evergreen |
15:01 |
Dyrcona |
That last question, I don't know the answer, too. I always assumed that you didn't. |
15:01 |
Dyrcona |
I don't really use that part of the system and don't code on it. |
15:01 |
Dyrcona |
s/too/to/... :) |
15:02 |
Dyrcona |
My IRCitis is acting up again. :) |
15:02 |
Dyrcona |
The records won't show up in the OPAC until you get the copies added. |
15:03 |
ssieb |
There's a queue action for import all records, but any options I pick there don't make it happen.. |
15:03 |
Dyrcona |
They're already imported all the way if they show up in search. |
15:04 |
ssieb |
They don't show up in the user catalog search |
15:05 |
Dyrcona |
Right. They need one of three things to show up there: copies, URLs, or the source needs to be "transcendent." |
15:05 |
ssieb |
What do you mean by that last bit? |
15:06 |
ssieb |
The source is an export from a different library management system. |
15:06 |
ssieb |
All the necessary fields are in it, I checked that. |
15:07 |
Dyrcona |
Evergreen has "sources" for bibs. You can configure these. It comes with 3 by default. You can add more. |
15:07 |
Dyrcona |
You can use it to organize the records in a logical manner. |
15:07 |
Dyrcona |
Typical examples of sources will separate the record by vendor, OCLC or Midwest Tapes, etc. |
15:08 |
Dyrcona |
I'm not really sure what you need to do to import the records through the normal import and have copies created. |
15:10 |
dbwells |
ssieb: Are you importing copies via the 852, or a different field? |
15:11 |
ssieb |
I just deleted the queue so I'm not sure now. |
15:11 |
ssieb |
But I'm now wondering if I'm specifying the wrong record type. |
15:11 |
ssieb |
I left it at the default of Bibliographic record, but maybe it should be Acquisitions record? |
15:12 |
dbwells |
No, Bib record is right. |
15:12 |
ssieb |
ok |
15:12 |
ssieb |
I think it's in the 852 field |
15:13 |
dbwells |
Holdings import can be tricky to get right, but you should get some useful error messages in the queue if your data is off or incomplete. |
15:14 |
ssieb |
There were no error messages that I could find. |
15:14 |
ssieb |
All records imported successfully. I just can't find any way to turn them into items. |
15:16 |
dbwells |
The item creation in the MARC importer only happens at load time, so if something goes wrong, the records will simply import with no items attached. There isn't a separate step to load the items (unless you then go and create them by hand, of course). |
15:17 |
dbwells |
"at load time" meaning when you load the records into the queue, not when you load them into the catalog (which can optionally happen later). |
15:17 |
Dyrcona |
ssieb: That second link that dbs shared earlier tells you how to setup an import profile for copy information. |
15:18 |
dbwells |
If you inspect the queue, you should see Items in Queue. If that shows a number, you have passed the first hurdle of getting the records to "stage" into the importer. |
15:19 |
ssieb |
Yes, I think that might have been where I messed up. But the records went right into the database, so I don't know about any step for that. |
15:19 |
ssieb |
Right, there were no items in the queue, just records. So that was my concern. |
15:19 |
ssieb |
Now I think I need to reset the database to get rid of the first import attempt. |
15:20 |
dbwells |
ssieb: If you don't select anything in the "Record Import Actions" section, the records/items will only load into the queue, not the catalog. That might be a good place to start to make sure you are getting everything into the queue you expect. |
15:21 |
dbwells |
Once a workflow is established, I think it is fairly common to do those Import/Merge options right at load time just to save a step. |
15:23 |
ssieb |
I selected the "import non-matching records" option under "record import actions" because without that, nothing seemed to even get into the queue. |
15:24 |
dbwells |
That option should not affect what gets into the queue. If it does, that sounds like a bug. |
15:30 |
ssieb |
I just got a different result! |
15:30 |
ssieb |
records in queue, items in queue, but all item import failures :-) |
15:31 |
csharp |
I can confirm that reverting ede7e789 solves the closed date display issue - not sure what the problem is but that's the cause |
15:31 |
pinesol_green |
csharp: [evergreen|Bill Erickson] webstaff: browser client: Remove closed dates editor XUL-y requirements - <http://git.evergreen-ils.org/?p=Evergreen.git;a=commit;h=ede7e78> |
15:32 |
csharp |
I don't know if webby would still work without that commit, so caveat emptor |
15:33 |
ssieb |
"invalid circulation modifier". At least I'm getting closer to it working. Looks like it's the import profile that's the problem. |
15:34 |
Dyrcona |
ssieb: You need to create the circulation modifiers in Evergreen before you can import records that use them. |
15:34 |
Dyrcona |
s/records/copies/ |
15:34 |
ssieb |
oh! |
15:35 |
Dyrcona |
That's under the Admin menu somewhere. |
15:36 |
mmorgan |
Admin -> Server Administration -> Circulation Modifiers |
15:36 |
Dyrcona |
csharp: Look at the diff for that commit, I don't see what the problem is, unless it is the removal of the JSAN module and util.date. |
15:36 |
ssieb |
found it, thanks. Looks like I have a bit more work to do on this... |
15:37 |
ssieb |
Not as easy as I was hoping it would be... :-/ |
15:38 |
Dyrcona |
csharp: So very likely some timezone conversion (or lack thereof) going on in the default JavaScript date object. |
15:41 |
dbwells |
ssieb: Circ modifiers are not necessary for valid items. If you don't need them for whatever you are trying to do, you could probably get away with simply removing the Circulation Modifier subfield setting from your Holdings Import Profile. |
15:46 |
ssieb |
ok, but it does seem to be useful information if I can make it work |
15:48 |
Dyrcona |
ssieb: You don't need them, but it makes configuring circ and holds rules easier if you have them. :) |
15:49 |
ssieb |
actually, it looks like that information isn't in the source |
15:53 |
|
kmlussier joined #evergreen |
15:57 |
kmlussier |
dbwells / ssieb: Unfortunately, you do need a circ modifier if importing items through the staff client. It's a bug. bug 1423750 |
15:57 |
pinesol_green |
Launchpad bug 1423750 in Evergreen "vandelay: importing items without circulation modifiers always fails" [Medium,New] https://launchpad.net/bugs/1423750 |
15:58 |
dbwells |
bummer |
15:59 |
ssieb |
Part of my problem is also that I'm not a librarian, so I don't understand a lot of the terms. |
16:00 |
kmlussier |
dbs: The LITA idea is interesting. I had been thinking of doing something specifically through the Evergreen community, but doing a larger mentorship program for library-related open source projects might be a possibility too. |
16:10 |
|
TARA joined #evergreen |
16:16 |
* Dyrcona |
just ate a tortilla chip shaped like Vermont. Maybe I should have saved it? :) |
16:16 |
Dyrcona |
All right. It's late to ask this question, but I will anyway. |
16:16 |
Dyrcona |
I want to start clark kent on the training server and it has a copy of production. |
16:17 |
Dyrcona |
I set recur to false on all of the reports. |
16:17 |
Dyrcona |
Is that all I need to do to prevent any surprise reports firing off? |
16:17 |
Dyrcona |
We want to test some reports manually. |
16:19 |
jeff |
also check for any reports scheduled to run which have not run. |
16:19 |
* jeff |
refreshes memory on what columns he means |
16:20 |
Dyrcona |
jeff: I was just going to aks. |
16:20 |
Dyrcona |
ask, even. |
16:20 |
dbwells |
yes, what jeff says |
16:20 |
dbwells |
Anything with a run_time in the future. |
16:20 |
dbwells |
in reporter.schedule |
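[Editor's note] A minimal sketch of that check; the column names are assumed from the reporter.schedule table being discussed, so adjust to the actual schema:

    SELECT id, report, runner, run_time
      FROM reporter.schedule
     WHERE run_time > NOW()
       AND start_time IS NULL;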
16:20 |
Dyrcona |
I thought that there was something like that. |
16:21 |
Dyrcona |
thanks! |
16:22 |
ssieb |
kmlussier: Is there any workaround to the circulation modifier? The source data doesn't have one. |
16:23 |
kmlussier |
ssieb: Not that I know of. But circulation modifiers are an Evergreen thing. The source data could have something similar that uses a different name. Like 'item type.' |
16:23 |
kmlussier |
ssieb: Alternatively, you could create a 'default' circulation modifier in Evergreen, and then add it to your 852 field with a program like MarcEdit. |
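[Editor's note] A hedged sketch of doing that outside MarcEdit with pymarc; the filenames, the $g subfield, and the 'DEFAULT' code are all assumptions -- the subfield has to match whatever the Holdings Import Profile maps to circ modifier, and the code has to match a modifier already created in Evergreen:

    from pymarc import MARCReader, MARCWriter

    with open('export.mrc', 'rb') as src, open('export_circmod.mrc', 'wb') as dest:
        writer = MARCWriter(dest)
        for record in MARCReader(src):
            for holding in record.get_fields('852'):
                # stamp a default circulation modifier into every holdings field
                holding.add_subfield('g', 'DEFAULT')
            writer.write(record)
        writer.close()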
16:24 |
ssieb |
It also doesn't seem to be picking up the barcode and that is set correctly. |
16:25 |
ssieb |
There might be something in a different field, but the import profile assumes it's all in one field. |
16:26 |
Dyrcona |
Nothing with a run date in the future. |
16:33 |
dbwells |
Dyrcona: Just curious, did you disable the recurrence via the interface, or the DB? |
16:34 |
Dyrcona |
dbwells: The DB. |
16:35 |
Dyrcona |
Turns out Clark was already running, anyway. |
16:35 |
kmlussier |
ssieb: I've only had knowledge of a few systems, but, in my experience, the holdings information is usually in one field. The field itself may vary from system to system, as well as the subfields used for particular pieces of data. But, like I said, my experience is limited. |
16:36 |
dbs |
kmlussier: easy for me to come up with ideas, then run away and hope someone else implements them; the hard part is actually making it happen |
16:36 |
Dyrcona |
ssieb: The records are in a format called MARC. It has numeric fields and usually alphabetic subfields. So everything will be in the same field, but different pieces will be in different subfields. |
16:37 |
ssieb |
yes, I'm aware of that. I've learned far more about this stuff than I really wanted to. :-) |
16:37 |
Dyrcona |
:) That seems to be what usually happens. |
16:37 |
kmlussier |
ssieb: Welcome to the club! :) |
16:39 |
dbwells |
Dyrcona: afaik, future run_time values in reporter.schedule are the mechanism for getting stuff to recur (a report runs, then reschedules itself), so perhaps anything scheduled harmlessly ran long ago :) |
16:39 |
ssieb |
It's not showing that it finds the barcode, but maybe that's because that would be attached to the items and those are failing on import. |
16:40 |
Dyrcona |
ssieb: That's what I suspect. |
16:40 |
kmlussier |
ssieb: Yes, so if the circ modifier stops the items from importing, then you won't be able to import any of that item information. |
16:40 |
Dyrcona |
kmlussier mentioned a program called MarcEdit earlier. You can use it to make batch edits to a file full of MARC records. |
16:40 |
dbwells |
Dyrcona: now action_triggers in dev, there's the real trouble :) |
16:41 |
Dyrcona |
You could use it to add the same circ modifier to all of the records and then reimport it. |
16:41 |
Dyrcona |
dbwells: Not if you redirect all of your server's email to /dev/null. ;) |
16:41 |
ssieb |
yes, I found a couple of programs last night for manipulating MARC files and I remember seeing that one |
16:43 |
ssieb |
oh, that was the one I was using :-) |
16:43 |
dbwells |
Dyrcona: reminds me of the time my predecessor announced our "new branch opening" to the campus :o |
16:43 |
Dyrcona |
:) |
16:44 |
Dyrcona |
Well, I've done enough damage for one day. |
16:44 |
Dyrcona |
I'm signing out. |
16:45 |
ssieb |
I don't think there's any way I'm going to be able to match the previously entered records to another import, so it's time to reset everything... |
17:05 |
kmlussier |
ssieb: If the items never imported with those records, you should be able to do another import using the default merge profile and matching on a field that has a unique value in it. |
17:05 |
kmlussier |
When the match is found, the system should add the holdings to those records without making a change to the MARC fields in the existing record. |
17:06 |
|
mmorgan left #evergreen |
17:11 |
ssieb |
kmlussier: ok, I'll try that when I get the import working. The docs say you should be able to use static text in the import profile, but what is the syntax for that? |
17:12 |
ssieb |
That seems like a good workaround for the circ modifier issue. |
17:14 |
csharp |
@later tell Dyrcona the line that's causing the date change is "return date_obj.toISOString().replace(/T.*/,''); // == %F" |
17:14 |
pinesol_green |
csharp: The operation succeeded. |
17:15 |
kmlussier |
ssieb: That static text needs to match existing circulation modifiers already created in Evergreen. My recollection is that it needs to match the code. |
17:16 |
* kmlussier |
hears the dinner bell and quickly makes her escape. |
17:16 |
kmlussier |
Have a nice night #evergreen! |
17:17 |
csharp |
(for anyone watching now) I did some console logging - first substitution is date_obj and the second is date_obj.toISOString().replace(/T.*/,'') - looks like it's transforming dates with a time of 23:59 into the next day |
17:17 |
csharp |
Date is now Thu Sep 08 2016 23:59:59 GMT-0400 (EDT). Now transform to 2016-09-09 |
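[Editor's note] A minimal reproduction of what csharp is logging: toISOString() renders in UTC, so a local 23:59 in a timezone west of UTC rolls forward to the next calendar day. A local-components formatter (a sketch only, not the committed fix) avoids that:

    var d = new Date(2016, 8, 8, 23, 59, 59);      // Sep 08 2016 23:59:59 local time (EDT in csharp's case)
    d.toISOString().replace(/T.*/, '');            // "2016-09-09" -- shifted a day once converted to UTC

    function localYMD(date) {
        var pad = function (n) { return (n < 10 ? '0' : '') + n; };
        return date.getFullYear() + '-' + pad(date.getMonth() + 1) + '-' + pad(date.getDate());
    }
    localYMD(d);                                   // "2016-09-08"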
17:18 |
csharp |
and with that, I walk away too |
17:19 |
|
jvwoolf1 left #evergreen |
17:33 |
|
_adb left #evergreen |
17:33 |
|
bmills joined #evergreen |
18:10 |
|
StomproJosh joined #evergreen |
18:57 |
ssieb |
This is so frustrating. The docs say something is possible, but there is no example and the obvious way doesn't work! |
18:57 |
ssieb |
And the code is OPAQUE... :-/ |
19:05 |
ssieb |
There aren't even any useful error messages that I can find. |
19:31 |
|
rlefaive joined #evergreen |
23:12 |
|
dcook joined #evergreen |