Evergreen ILS Website

IRC log for #evergreen, 2014-12-29


All times shown according to the server's local time.

Time Nick Message
02:58 b_bonner_ joined #evergreen
02:59 jeffdavi1 joined #evergreen
05:09 pinesol_green Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html
07:30 rjackson-isl joined #evergreen
07:37 jboyer-isl joined #evergreen
07:46 ericar joined #evergreen
07:57 TaraC joined #evergreen
08:09 ericar_ joined #evergreen
08:14 ericar_ joined #evergreen
08:15 ericar left #evergreen
08:25 graced joined #evergreen
08:27 mrpeters joined #evergreen
08:41 abowling joined #evergreen
08:43 mmorgan1 left #evergreen
08:47 mmorgan joined #evergreen
08:47 Shae joined #evergreen
08:58 phasefx joined #evergreen
08:58 dbwells joined #evergreen
09:00 Dyrcona joined #evergreen
09:01 * Dyrcona is using the guest wireless, 'cause the laptop fails to authenticate with the staff wireless, and it does not get an IP address when plugged in to ethernet.
09:02 Dyrcona The latter worked on Saturday.
09:02 Dyrcona I sometimes think we've made a huge mistake with this technology that only works sometimes.
09:03 dbs Gimme a good old pen and paper!
09:03 Dyrcona The wifi isn't a surprise when you see the 24 other hotspots within range.
09:03 dbs (he said, looking at 44,000 records to update for various reasons of ugliness. most of those reasons being MARC)
09:04 dbs hah, yeah, I remember the days when our IT services people would roam around with wifi detectors looking for rogue routers that were interfering with the main campus reception
09:04 * Dyrcona loaded 21 million plus Apache access log lines into a database table over the weekend. Yay. Fun.
09:04 dbs now that's all gone to hell
09:04 Dyrcona The failure to get an IP over the ethernet bothers me, but not enough to look into it right now.
09:05 Dyrcona We're still having crashes related to running out of cstore drones.
09:05 Dyrcona Had one Friday and again Saturday not almost 24 hours apart.
09:05 Dyrcona "not almost"  bah.....
09:06 Dyrcona Like twenty three and a half hours.
09:06 Dyrcona I've decided that I need more information, so my main priority this morning is to set up a process to log the numbers of all drones and the load on the Evergreen server for each minute of the day.
09:07 sarabee joined #evergreen
09:07 Dyrcona It might help to see if a bunch of some other type of drone also gets spawned, as those could be blocking on cstore.
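
A per-minute logger like the one Dyrcona describes can be quite small. The sketch below is only an illustration of the idea, not the script actually used: it assumes OpenSRF drone processes can be picked out of ps output by the string "OpenSRF Drone [service-name]" (adjust the pattern for your installation), and it writes one CSV row per service per minute alongside the one-minute load average.

```python
#!/usr/bin/env python
# Hypothetical per-minute drone/load logger (illustrative only).
# Assumes OpenSRF drones show up in `ps` output as "OpenSRF Drone [service]".
import csv
import re
import subprocess
import time

LOGFILE = "/tmp/drone_stats.csv"                       # assumed location
DRONE_RE = re.compile(r"OpenSRF Drone \[([\w.-]+)\]")  # assumed process name pattern

def sample():
    ps = subprocess.check_output(["ps", "-eo", "args"]).decode("utf-8", "replace")
    counts = {}
    for match in DRONE_RE.finditer(ps):
        service = match.group(1)
        counts[service] = counts.get(service, 0) + 1
    with open("/proc/loadavg") as f:
        load1 = f.read().split()[0]   # one-minute load average
    return counts, load1

if __name__ == "__main__":
    while True:
        counts, load1 = sample()
        with open(LOGFILE, "a") as out:
            writer = csv.writer(out)
            for service, count in sorted(counts.items()):
                writer.writerow([time.strftime("%Y-%m-%d %H:%M"), service, count, load1])
        time.sleep(60)
```

Left running for a day or two, a log like this can be lined up against the crash windows to see which services' drone counts climb together.
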
09:10 Dyrcona Oops. Looks like we had problems yesterday, too, but no one called me about it.
09:10 Dyrcona I got emails from icinga, guess I should have checked it from home.
09:11 Dyrcona So, I can expect it again in an hour and a half to two hours from now.
09:15 RoganH joined #evergreen
09:15 dbs Dyrcona: since we upgraded to 2.7.2 from 2.4-ish, we ran out of drones twice. But that was because the database crashed because it ran out of disk space. Can't blame _that_ on evergreen.
09:16 Dyrcona True, but we've got plenty of disk space.
09:26 dbs That is wise :)
09:38 bshum Dyrcona: pong
09:38 * bshum is getting slower
09:39 dbs bshum: look for a firmware upgrade for yourself
09:39 * dbs suggests coffee but knows that's not bshum's thing
09:40 bshum Heh
09:40 * bshum query dbs
09:40 bshum Oops
09:48 BigRig joined #evergreen
09:52 Dyrcona heh.
09:52 * Dyrcona was away from the desk.
09:53 Dyrcona bshum: It wasn't important, or at least, it is no longer important. Was just gonna bother you about drones again. :)
09:53 bshum Heh
09:53 bshum Yeah, I getcha :(
09:53 bshum I wish I could corroborate some of your weirdness but we don't seem to have issues like that in our environment. Yet.
09:57 bshum dbs: The other day I was musing about those deleted bibs and thinking it would be cool to create some way of mapping merges to redirect people to the new lead record if the old one is merged/deleted
09:58 dbs bshum: that would be a nice enhancement, yeah
09:58 bshum That way if someone adds an external link to a single bib (liking it on Facebook, pinning it, whatever), they won't be stuck
09:58 dbs so then we could issue a 301 with a Location: header
09:59 bshum Hmmm
10:00 bshum Where could we store that info? Maybe a new column on biblio.record_entry?
10:01 dbs bshum: that, or a separate table (to avoid redirect -> redirect -> redirect after several successive merges)
10:01 bshum dbs: ack, good point
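
Nothing like this redirect map exists in stock Evergreen at this point; the sketch below only illustrates dbs's point about a separate table avoiding redirect chains, using an in-memory dict and made-up names (record_merge_map, record_merged, redirect_for) in place of a real table. When record B is merged into lead record A, any mapping that already points at B is rewritten to point at A, so a lookup is always a single hop before the 301 is issued.

```python
# Hypothetical illustration of the "separate table" idea discussed above,
# using an in-memory dict as a stand-in for a database table.
record_merge_map = {}   # deleted bib id -> current lead bib id

def record_merged(old_id, lead_id):
    """Record that old_id was merged into lead_id, collapsing any chains."""
    # Earlier merges that pointed at old_id now point straight at lead_id,
    # so lookups never have to follow redirect -> redirect -> redirect.
    for deleted, lead in record_merge_map.items():
        if lead == old_id:
            record_merge_map[deleted] = lead_id
    record_merge_map[old_id] = lead_id

def redirect_for(bib_id):
    """Return the 301 target path for a deleted bib, or None if no mapping."""
    lead = record_merge_map.get(bib_id)
    if lead is None:
        return None
    # The OPAC handler would answer:
    #   301 Moved Permanently
    #   Location: /eg/opac/record/<lead>
    return "/eg/opac/record/%d" % lead

# Example: 101 merged into 202, then 202 merged into 303.
record_merged(101, 202)
record_merged(202, 303)
assert redirect_for(101) == "/eg/opac/record/303"
```
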
10:02 Dyrcona Ooh. We just hit 200 drones a couple of minutes ago, and now we're at 196.....
10:02 Dyrcona I better get crackin' on my new stats program.
10:03 dbs Dyrcona: man, I totally feel for you. we used to hit that back in the early 2.1-ish days I think
10:03 dbs Dyrcona: no pg backends pinning the database CPU for long periods of time?
10:06 Dyrcona No, not according to PgAdmin's server status.
10:06 Dyrcona Which just looks at pg_stat_activity and the locks table anyway.
10:07 Dyrcona Number of drones is dropping.
10:34 Dyrcona One interesting thing: I see a number of what appear to be the same queries in the locks table when the drones hit 200 and the load starts to creep upward.
10:34 Dyrcona But, they disappear within a couple of seconds.
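
The kind of check being described here can be run directly against the database. The sketch below is a rough example, not what Dyrcona or PgAdmin actually ran: it assumes PostgreSQL 9.2+ column names in pg_stat_activity (pid, state, query; 9.1 and earlier use procpid and current_query), and the connection string is a placeholder.

```python
# Rough sketch: long-running backends and waiting lock requests.
import psycopg2

LONG_RUNNING = """
    SELECT pid, now() - query_start AS runtime, state, left(query, 80)
      FROM pg_stat_activity
     WHERE state <> 'idle'
       AND now() - query_start > interval '30 seconds'
     ORDER BY runtime DESC
"""

WAITING_LOCKS = """
    SELECT locktype, relation::regclass, mode, pid
      FROM pg_locks
     WHERE NOT granted
"""

conn = psycopg2.connect("dbname=evergreen host=db.example.org user=evergreen")  # placeholder
with conn.cursor() as cur:
    for label, sql in (("long-running", LONG_RUNNING), ("waiting", WAITING_LOCKS)):
        cur.execute(sql)
        for row in cur.fetchall():
            print(label, row)
conn.close()
```
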
10:42 dreuther joined #evergreen
10:49 dreuther joined #evergreen
10:54 mllewellyn joined #evergreen
11:00 vlewis joined #evergreen
11:08 maryj joined #evergreen
12:08 dbwells joined #evergreen
12:41 collum joined #evergreen
13:08 jihpringle joined #evergreen
13:10 jboyer-isl Dyrcona, Here it appears to depend on the workload. Our SIP and OPAC servers have reasonable/smallish cstore counts, but our Utility server (Action Triggers, Holds, and Fine Gen) is hitting that wall hard and often.
13:12 Dyrcona jboyer-isl: Don't know if you were paying attention a couple of weeks ago, but we suddenly started having real problems after upgrading to 2.7.2 on December 7. Before that, we got along just fine with max cstore drones at 100.
13:13 Dyrcona Since the upgrade 200 is not enough.
13:13 Dyrcona I'm trying to figure out why.
13:14 Dyrcona What I find very interesting is that we've occasionally hit 200 in the middle of the night on Sunday, when all of our libraries are closed.
13:16 jihpringle_ joined #evergreen
13:17 jboyer_isl joined #evergreen
13:21 jeffdavis joined #evergreen
13:28 vlewis_ joined #evergreen
13:28 mllewellyn1 joined #evergreen
13:31 dcook__ joined #evergreen
14:07 jboyer-isl joined #evergreen
14:40 jboyer-isl eeevil++
14:42 jboyer-isl We switched to Multiplexed SIP Servers this morning, we've gone from hitting the max_server limit to serving all of our clients with 45 processes. I may even be able to go down to a single server at this rate.
14:47 jboyer-isl Oh, hey, look at who spoke too soon, now that we have SIP tickets coming in. :-/ Anyway, the memory usage is still better.
14:51 bshum Heh
14:51 bshum jboyer-isl++ # live feedback :)
15:06 bshum berick: Ooooo, bug 1406367 sounds intriguing :D
15:06 pinesol_green Launchpad bug 1406367 in Evergreen "Fine generator can use a lot of memory" (affected: 1, heat: 6) [Undecided,New] https://launchpad.net/bugs/1406367 - Assigned to Bill Erickson (erickson-esilibrary)
15:08 Dyrcona Yeah, I just saw that email from launchpad.
15:09 berick bshum: have you noticed high mem. usage locally?
15:10 bshum berick: I actually haven't monitored it that closely. Though the server that runs fines also runs A/T events and we do allocate a lot of memory to the server to avoid running out of room in general.
15:10 bshum In the past, I know we've run out of memory before, so we've always kept it at a very high number anyways.
15:11 berick bshum: gotcha.  yeah, we get mem alerts occasionally.  didn't think much about it until i saw what happened on the test server.
15:15 Dyrcona I've never gotten memory alerts, but I know it can take a long time to generate fines on a dev/test server that has been kept up to date.
15:15 Dyrcona Should be "has not been kept up to date."
15:16 bshum The idea of skipping the $0 stuff sounds like a really good thing to me.
15:16 bshum I know that ends up being tons for us
15:16 berick yeah, that should be an easy win.
15:16 Dyrcona Same for us, about half of our libraries do not charge fines.
15:16 berick oh, wow
15:17 berick that could be huge, then
15:17 bshum Sounds about right.
15:17 bshum I think we're at least that.  Plus staff/teachers, etc.
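
The "easy win" is simply to skip circulations whose recurring fine is zero before doing any per-day work, instead of generating a $0 billing row for every open day. The toy sketch below only illustrates that idea; it is not the Evergreen fine generator (which is Perl), and the circ dicts are stand-ins for rows like those in action.circulation.

```python
# Toy illustration of skipping zero-fine circulations up front.
from datetime import date, timedelta

def generate_fines(circs, today=None):
    today = today or date.today()
    bills = []
    for circ in circs:
        if circ["recurring_fine"] == 0:
            continue  # nothing to bill, and nothing to hold in memory
        day = circ["due_date"]
        while day < today:
            day += timedelta(days=1)
            bills.append((circ["id"], day, circ["recurring_fine"]))
    return bills

circs = [
    {"id": 1, "due_date": date(2014, 12, 1), "recurring_fine": 0.10},
    {"id": 2, "due_date": date(2014, 12, 1), "recurring_fine": 0},   # skipped entirely
]
print(generate_fines(circs, today=date(2014, 12, 29)))
```
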
15:21 jeff i still enjoy the tale of our migration where the fine generator kicked off and generated $0 bills for every day of every open circ in the system, then got to the current day for each circ and THEN marked MAXFINES.
15:21 jeff so we had millions of $0 bills for a long time. :-)
15:21 jeff the client did not like retrieving bills on those patrons.
15:22 jeff and/or REALLY did not like retrieving details on said bills.
15:25 berick jeff: was that.. 1.2?
15:26 jeff 1.2.0.4 -- first version with Hold Capture Verify and Rental support. :-)
15:27 jeff November 2008.
15:29 berick nice
15:55 Dyrcona Well, I threw that branch up on my development server, I'll have to figure out how to test it later.
15:56 Dyrcona berick++
15:56 berick Dyrcona++ sweet
16:52 pinesol_green Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html
17:08 mmorgan left #evergreen
18:07 cfarley joined #evergreen
18:10 cfarley Hello, I am having a problem and hoping that someone has run into it before. I am running 2.6.4 and when I try to place an item on hold through the opac, nothing happens.
18:11 cfarley I don't get an error message, it just reloads the page, blanking the phone number field.
18:11 cfarley Any ideas would be appreciated.  Thank you.
18:14 jeffdavis cfarley: Any errors in your opensrf or Apache logs?
18:15 jihpringle_ cfarley: it could be an issue with restrictions on the phone number field in the OPAC.
18:15 jihpringle_ I think we had some issues because the phone numbers in the database hadn't been entered in the format the OPAC expected
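
One way to act on jihpringle_'s suggestion is to scan patron phone numbers for values the hold form might reject. The sketch below is illustrative only: the expected pattern (###-###-####) and the choice of actor.usr.day_phone are assumptions to check against your own OPAC's validation and the fields your patrons actually use.

```python
# Rough sketch for spotting phone numbers the OPAC's hold form might reject.
import re
import psycopg2

EXPECTED = re.compile(r"^\d{3}-\d{3}-\d{4}$")   # assumed format

conn = psycopg2.connect("dbname=evergreen host=db.example.org user=evergreen")  # placeholder
cur = conn.cursor()
cur.execute("""
    SELECT id, day_phone
      FROM actor.usr
     WHERE deleted IS FALSE
       AND day_phone IS NOT NULL
""")
bad = [(usr_id, phone) for usr_id, phone in cur.fetchall()
       if not EXPECTED.match(phone)]
print("%d patrons with day_phone in an unexpected format" % len(bad))
conn.close()
```
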
23:19 dcook__ joined #evergreen
