
IRC log for #evergreen, 2014-02-10


All times shown according to the server's local time.

Time Nick Message
00:35 shadowspar joined #evergreen
05:23 timlaptop joined #evergreen
07:50 rjackson-isl joined #evergreen
08:00 csharp oof - a cataloger on a mac just hit bug 1075262
08:00 pinesol_green Launchpad bug 1075262 in Evergreen "volume/copy creator bug "error in stash and close, ancn_id = undefined"" (affected: 1, heat: 6) [Undecided,Confirmed] https://launchpad.net/bugs/1075262
08:00 csharp we were thinking it was a mac thing (and it may be), but I'm able to recreate it
08:02 misilot joined #evergreen
08:02 collum joined #evergreen
08:04 sseng joined #evergreen
08:20 kbeswick joined #evergreen
08:32 csharp hmm... on closer reading of that bug report (bug 1075262), the error is the same, but the cause isn't - our top-level org is id 1
08:32 pinesol_green Launchpad bug 1075262 in Evergreen "volume/copy creator bug "error in stash and close, ancn_id = undefined"" (affected: 1, heat: 6) [Undecided,Confirmed] https://launchpad.net/bugs/1075262
08:33 csharp and since we're not hearing reports of this from windows users (or anyone else, for that matter), I'll go back to my working assumption that something about the Mac OS X platform is causing the error.
08:45 ericar joined #evergreen
08:54 jl- morning
08:56 mmorgan joined #evergreen
09:01 dluch joined #evergreen
09:11 AaronZ-PLS joined #evergreen
09:15 RoganH joined #evergreen
09:15 csharp hmm looks like what we're seeing is more like bug 782495
09:15 pinesol_green Launchpad bug 782495 in Evergreen "Add Batch Volume Produces Error" (affected: 1, heat: 6) [Undecided,Invalid] https://launchpad.net/bugs/782495 - Assigned to Jason Etheridge (phasefx)
09:15 csharp I wonder if Shae was using a Mac
09:16 phasefx doubtful
09:16 csharp our initial reaction was like phasefx's - we were unable to reproduce
09:18 csharp phasefx: thanks
09:19 csharp we're actually pulling our "we don't support Mac OS X, use Windows" card, but if it was something others are seeing with a viable workaround, I was going to offer that
09:19 kbeswick joined #evergreen
09:22 * csharp is seeing a wintry mix of weather advisories and closure notices
09:22 csharp @weather 30345
09:22 pinesol_green csharp: The current temperature in Lakeside, Atlanta, Georgia is 45.0°F (9:15 AM EST on February 10, 2014). Conditions: Overcast. Humidity: 81%. Dew Point: 39.2°F. Windchill: 44.6°F. Pressure: 30.12 in 1020 hPa (Rising).  Winter Weather Advisory in effect from 7 PM this evening to 7 PM EST Tuesday...
09:24 AaronZ-PLS Settings storage question...
09:24 AaronZ-PLS We have a library who managed to scan a barcode into Staff client/Patron details/Bills/Receipt Options/Number of Copies.
09:24 AaronZ-PLS Where does EG store that setting in 2.2.1?
09:24 AaronZ-PLS Theoretically, they could change it by hitting the down button (you can't type into the field), but hitting the down button more than 52000000000000 times seems somewhat over the top.
09:24 AaronZ-PLS I tested it myself and I can't type in that box, but I can scan in some barcodes, i.e. "US060TF7535903BCABD9" (PN from a Dell DVD) works, but "A00" (revision # from the same DVD) doesn't.
09:25 phasefx AaronZ-PLS: try pulling up Admin -> For Developers -> about:config, filter on persist, and scroll through the results
09:27 phasefx filtering on receipt_upon_payment should get you closer
09:27 AaronZ-PLS phasefx: Thanks. Looks like it's oils_persist_evergreen.owwl.org_bill2.xul_num_of_receipts_value
09:27 * csharp can't fine where AaronZ-PLS is referring to...
09:27 csharp s/fine/find/
09:28 phasefx AaronZ-PLS: oops, yeah that one, not receipt_upon_payment :)
09:28 csharp I'm in a patron account
09:29 * csharp is interested, because if something *can* go wrong, it *will* go wrong in PINES
09:29 csharp it's like we're the original Murphy's Law testbed for Evergreen
09:29 AaronZ-PLS csharp: Go to Bills, then in the bottom right corner click on the Receipt Options menue and look for the Number of Copies box
09:29 csharp AaronZ-PLS: gotcha - thanks
09:29 AaronZ-PLS Erm, menu, not menue...
09:36 mllewellyn joined #evergreen
09:38 yboston joined #evergreen
09:44 csharp AaronZ-PLS: so did the library discover this when their client started to endless print copies of the receipt?
09:44 csharp s/endless/endlessly/
09:44 csharp tis a morning for typos
09:45 AaronZ-PLS Here is an odd one. Why would the about:config option be greyed out? Screenshot: http://snag.gy/wG3nU.jpg
09:45 AaronZ-PLS Logged in as myself, happens when running EG as either the "normal" staff user, or as administrator
09:45 AaronZ-PLS csharp: Yes
09:46 phasefx AaronZ-PLS: you need the DEBUG_CLIENT permission
09:46 AaronZ-PLS phasefx: What's odd is that it works for me logged into my workstation, but not on the library's workstation
09:47 csharp different working locations?
09:47 AaronZ-PLS And it works logged into EG on my computer as the library's EG circ user
09:47 phasefx AaronZ-PLS: I'm not sure if it's a global permission or one that can be affected by working locations; if the latter, then you may need to increase the depth granted
09:47 AaronZ-PLS Same EG working location
09:49 AaronZ-PLS Must have been permission related. Logging in as admin let me fix it
09:53 ericar_ joined #evergreen
09:53 AaronZ-PLS phasefx: Thanks again
09:54 phasefx you're welcome
10:20 misilot left #evergreen
10:21 mrpeters joined #evergreen
10:24 konro joined #evergreen
10:46 fredp_ joined #evergreen
10:52 Dyrcona joined #evergreen
11:24 finnx joined #evergreen
11:56 jboyer-isl I assume that if there's a discrepancy in the config.metabib_field and config.metabib_search_alias tables vs the seed data, the prudent course of action is to just dump them and reload? They haven't been customized.
11:57 jboyer-isl I ask because our personal author definition has been made into another corporate author entry because of an upgrade that changed how corp authors are defined, and it specifically updates an id.
12:01 jboyer-isl Actually, looking at our current db the author entries are completely screwed up anyway, so as far as I'm concerned drop and reload is the plan now. :/
12:04 Dyrcona jboyer-isl: Actually, that sounds like you have some customizations from before the rows existed in stock.
12:12 jboyer-isl Comparing the 2 versions of our db and the seed data, we're missing the alternative title entry, and all ids after 4 are off-by-one. It's going to be a whole new system after this weekend's reingest. D:
12:12 jboyer-isl (because I was wrong about the search_alias table, it is the same as the seed data)
12:16 sseng_current joined #evergreen
12:20 jboyer-isl Well, it's not entirely screwed up, but the alternative title was shifted 11 ids down, so up to 11 fields are potentially messed up in config.metabib_search_alias.
12:22 Bmagic|2 joined #evergreen
12:25 csharp static_ids--
12:26 afterl joined #evergreen
12:27 dconnor_ joined #evergreen
12:29 timlaptop joined #evergreen
12:34 jboyer-isl If they were really static I wouldn't have a problem with them. :(
12:37 Dyrcona Yeah, the real problem is using a sequence there. You have to reserve values by setting the sequence to start at 1000 or so.
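
(A minimal SQL sketch of the reservation Dyrcona describes; the sequence name config.metabib_field_id_seq is an assumption based on PostgreSQL naming defaults, not something confirmed in the log.)

    -- Reserve ids below 1000 for stock seed data; local rows that
    -- omit the id column will then be assigned 1000, 1001, ...
    SELECT setval('config.metabib_field_id_seq', 1000, false);
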
12:41 bshum joined #evergreen
12:43 jboyer-isl Ah, well. I'll wait until after lunch to see how far this ripples.
12:46 Dyrcona There are a number of foreign key relationships with that table that you need to worry about.
12:48 Dyrcona We had 1 custom row and ran something like this to fix it.
12:49 pastebot "Dyrcona" at 64.57.241.14 pasted "config.metabib_field_fix.sql" (40 lines) at http://paste.evergreen-ils.org/18
12:50 Dyrcona The alter table and setval bits were part of an upgrade script at some point, IANM.
12:51 * Dyrcona always forgets one of the two "I"s in IIANM: If I Am Not Mistaken.
13:00 csharp I AM NOT MISTAKEN!
13:01 csharp jboyer-isl: yeah - my decrement was imprecise... what I really don't like is that specific serial IDs matter at all to the application.  Not sure what the alternatives are, but that has bitten PINES many times in the past :-/
13:05 Dyrcona Well, in most cases, the serial ids don't actually matter to the application, and I think that is also the case with config.metabib_field.
13:05 Dyrcona The seed scripts and upgrade scripts are just usually written with hard coded id values.
13:06 Dyrcona Our system was working just fine before and after running the script that I pasted.
13:08 Dyrcona I guess the problem is treating serial fields as if they contain magic values.
13:08 Dyrcona Or putting magic values into serial fields.
13:11 gsams joined #evergreen
13:31 jboyer-isl Dyrcona: Well that's no good. I was hoping to get out of this with only a few tables, but I guess that's not happening.
13:31 bshum I never heard of config.metabib_search_alias
13:31 bshum But I assume that the shifted IDs are because there have been multiple attempts at moving custom metabib indexes into the 100, later 1000+ ID range
13:32 bshum And setting back up the "stock" index space
13:33 jboyer-isl Worse yet, there are some tables that should NOT be changed. config.metabib_search_alias is "correct," for instance, because its seed data just specifies ids, and it currently matches what's expected. All of the tables defined by "select * from config.metabib_field where" WOULD need to be changed. :(
13:34 bshum I don't think I understand what that means.
13:34 jboyer-isl bshum: The shifted entry was one of the seed entries, just in the wrong order. Apparently they were inserted with DEFAULT ids and one was forgotten for a while.
13:35 jboyer-isl bshum: Some tables just have the id numbers in the seed data INSERTs, others are generated by doing a select on config.metabib_field. Or is that not the confusing part?
13:38 Dyrcona jboyer-isl: If you can set all of the FK relationships to CASCADE, the tables will pick up the changes when you adjust IDs.
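
(A hedged sketch of that CASCADE idea; config.metabib_field_index_norm_map is one real child table, but the constraint name below is illustrative and should be checked with \d in psql before running anything.)

    -- Recreate one FK so parent id changes ripple to the child table;
    -- every referencing table would need the same treatment.
    ALTER TABLE config.metabib_field_index_norm_map
        DROP CONSTRAINT metabib_field_index_norm_map_field_fkey,
        ADD CONSTRAINT metabib_field_index_norm_map_field_fkey
            FOREIGN KEY (field) REFERENCES config.metabib_field (id)
            ON UPDATE CASCADE;

    -- With CASCADE in place, renumbering the parent updates children too.
    UPDATE config.metabib_field SET id = 1001 WHERE id = 31;
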
13:39 jboyer-isl Dyrcona: do you remember around what version you ran that script? Starting with that will be a big help.
13:41 bshum jboyer-isl: I think the thing I was trying to understand is how you ended up with the wrong ID numbers for certain seed entries.  To do that, either local editing or missed upgrades would have played a part in it, I guess.
13:42 bshum I know in our case, during our last major upgrade, I had to fiddle with ours because we were missing one of the newer entries during an interim time where master was changing with the addition of new stock indexes.
13:42 bshum But we've never deviated too far from stock
13:43 bshum Though I didn't realize what Dyrcona is saying about FK relationships.  That sounds... messier than I thought it was.
13:44 Dyrcona jboyer-isl: Before our last update in December, so 2.5+ish.
13:45 Rish joined #evergreen
13:46 Rish joined #evergreen
13:47 Recky joined #evergreen
13:47 jboyer-isl Good to hear. I'm going to try it against our test database and see how hard the cleanup will be.
13:48 jboyer-isl Edited to correct our situation, obviously.
13:50 jeff monitoring and logs... can you ever have enough?
13:50 jeff good panel discussion.
13:52 jeff everybody introduces themselves, everybody agrees you can never have enough logs, monitoring, automation, or testing (or automated testing, or automated monitoring (because really, should there be any other kind?)), panel discussion over.
13:52 dbs Entire panel automatically monitored and logged by the NSA
13:53 jboyer-isl jeff: I think you're forgetting the part where everyone argues that 75% of what the other panelists are doing misses critical issues.
13:53 jeff jboyer-isl: that's the post-panel discussion, over everyone's choice of drink.
13:54 jeff jboyer-isl: besides -- what is the answer to monitoring/testing/logging/automating the wrong thing? the answer is monitoring/testing/logging/automating MORE THINGS!
13:54 dbs And then everyone goes off to build a new integrated automatic log monitoring system (IALMS)
13:57 jboyer-isl Ugh. Not a fan of log monitoring. I'd rather poke at live state. (I say this having just developed a local feature that necessitates log monitoring...)
13:57 jeff what local feature?
14:00 jboyer-isl Using a payment processor I like so little I'm not submitting the code to core. There's a chance that if an authtoken is lost in memcache that the payment can still proceed but we don't know where to apply it. So something has to scan our osrfsys logs for SIREN.GIF and start sending urgent emails if that happens.
14:00 jboyer-isl The moral of the story is test and use Stripe. :)
14:01 jboyer-isl Just noticed I left out a "that" in that explanation. Suffice it to say that due to circumstances beyond my control, using a different provider is out.
14:01 tsbere That sounds possibly problematic. What if the gif isn't requested due to other things (like someone turning off images)?
14:01 jboyer-isl but that's no reason anyone else should use it.
14:01 jeff oof.
14:01 jboyer-isl No, no. It literally prints "SIREN.GIF" in the log files. A joke just for me.
14:02 tsbere ahh
14:02 jboyer-isl along with an explanation of how to potentially fix the problem
14:02 tsbere ok
14:03 * tsbere would probably have done something along the lines of "new table to put payment and what it should be applied to with that processor" instead of relying on memcache
14:03 csharp so if I set a closed date for tomorrow, and the fine generator runs tonight, skipping my library, and then I end up opening tomorrow and remove the closed dates, will the fine generator include "back" fines for the date that was skipped?
14:03 csharp (when it runs tomorrow night, that is)?
14:04 tsbere csharp: In theory, yes
14:04 csharp our libraries want to err on the side of staying open
14:04 jboyer-isl tsbere: I don't doubt at all that there are better ways to do it. But I've made sure our timeout is almost double what theirs is, so unless someone makes a payment just as a memcache machine dies it shouldn't actually be a problem, just gross.
14:04 csharp but they also don't want fine generation if they end up closing, which results in manual fine voiding
14:04 tsbere csharp: If the fine generator tacks fines on after a closed date, though, the closed date won't get filled in after that point if removed
14:05 csharp I see
14:05 csharp so they'd need to remove the fines before the next run
14:05 csharp s/fines/closings/
14:05 tsbere yea
14:05 csharp ok - that's how I thought it worked.
14:05 csharp thanks
14:05 csharp tsbere++
14:06 tsbere csharp: Note I haven't *tested* any of this, this is based on my knowledge of the fine generator and some assumptions. ;)
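
(For spotting fines that landed on a day whose closing was later removed, a rough query along these lines might help; the org unit id is hypothetical, and btype 1 as "Overdue materials" is taken from stock seed data and worth verifying locally.)

    -- Overdue billings stamped on a given date at one circ lib.
    SELECT b.id, b.xact, b.amount, b.billing_ts
      FROM money.billing b
      JOIN action.circulation c ON c.id = b.xact
     WHERE c.circ_lib = 42                 -- hypothetical org unit id
       AND b.btype = 1                     -- 1 = Overdue materials (stock)
       AND NOT b.voided
       AND b.billing_ts::date = DATE '2014-02-11';
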
14:14 jl- I'm trying to bulk import in /test/datasets/sql with a 700mb file (17 million lines) but I get
14:14 jl- psql:bibs_concerto.sql:17580472: ERROR:  out of memory
14:14 jl- DETAIL:  Failed on request of size 268435456.
14:15 jl- any suggestions? I'm gonna give the VM more memory but it already has 4 GB
14:16 Dyrcona 4GB is nothing these days.
14:16 Dyrcona Evergreen is barely usable with no data in about 2GB.
14:17 jl- I'm hoping that this means my import file has been sanitized correctly
14:17 jl- and that it's just a memory issue
14:17 Dyrcona jl-: No idea. It means you need more RAM to find out.
14:17 Dyrcona Are you actually loading concerto or did you mangle the concerto file to load your own?
14:18 jl- the latter
14:18 jl- the concerto file is now ~1GB
14:21 jl- I've formatted all / to be // and all ' to be ''
14:21 jl- s/\\/\
14:22 * Dyrcona eyes the leaning toothpicks warily.
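
(The doubling jl- describes amounts to standard SQL string escaping; a quick illustration, assuming PostgreSQL with standard_conforming_strings on.)

    SELECT 'Miller''s report';   -- '' inside a literal yields one quote: Miller's report
    SELECT E'a \\ b';            -- backslashes need doubling only in E'' strings: a \ b
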
14:26 jl- psql:bibs_concerto.sql:17580472: connection to server was lost
14:26 jl- :/
14:27 jl- any ideas dbs or Dyrcona
14:28 Dyrcona Are you running the database and everything on the vm?
14:28 jl- yes
14:28 Dyrcona Don't.
14:29 Dyrcona Practically speaking, though, how much RAM did you give the VM?
14:29 dbs Might be worthwhile monitoring what RAM usage looks like while you're loading
14:29 dbs e.g. what's using up all the RAM
14:29 dbs And checking the postgresql logs for complaints :)
14:29 fparks joined #evergreen
14:31 dbs Might need to load the bibs in batches of 1000 or something like that, hard to tell from here
14:31 jl- I gave the VM 6 GB ram right now, memory does not seem to be the issue this time though, because it doesn't complain of a memory error
14:31 jl- I'm going to try to split the file in half and try again
14:32 jl- dbs you mentioned the other day that I only need to escape the single quotes, I did this by making all ' into '' and all \ into \\
14:32 jl- is that viable?
14:32 dbs IIRC that's basically what we did in the concerto bibs
14:33 dbs One of the reasons we use a staging table is so that we can dissect (and skip) records that go wrong when they're imported into the actual biblio.record_entry table
14:34 dbs As well as enabling us to easily do something like "INSERT INTO biblio.record_entry(marc, xact_id) SELECT marc, xact_id FROM staging_table WHERE id < 1000;" for importing chunks at a time
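
(A minimal sketch of that staging pattern; the staging table here is a simplified stand-in rather than the concerto loader itself, and stock biblio.record_entry expects last_xact_id with its other columns filled by defaults, if memory serves.)

    CREATE TABLE staging_bibs (id BIGSERIAL PRIMARY KEY, marc TEXT NOT NULL);
    -- Load one-record-per-line MARCXML, e.g. in psql:
    --   \copy staging_bibs (marc) FROM 'bibs_one_per_line.xml'

    -- Then import in chunks so a bad record is easy to isolate:
    INSERT INTO biblio.record_entry (marc, last_xact_id)
    SELECT marc, 'IMPORT' FROM staging_bibs WHERE id BETWEEN 1 AND 1000;
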
14:35 jl- hmm
14:35 jl- yes the process seems to be very efficient
14:37 tsbere_ joined #evergreen
14:38 jl- hm so we have 17 million lines
14:38 jl- I wonder how many records that amounts to
14:38 jl- let's see if I can count the string <record> with sed or ex
14:40 jl- grep -c "<record" bibs_concerto.sql
14:40 jl- 238613
14:40 jl- Oo
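
(Once a load does succeed, a count like this is a quick way to compare against grep's 238613; this assumes the stock biblio.record_entry table with its deleted flag.)

    SELECT count(*) FROM biblio.record_entry WHERE NOT deleted;
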
14:43 Dyrcona XML is a tool for making data explode.
14:44 jl- yes I'm learning more about ILS than I had ever hoped for
14:44 jl- I've been marathoning evergreen for 3 days now
14:48 dbs jl-++
14:49 jl- dbs any tool you recommend for chunking?
14:49 jl- or gist
14:50 dbs Taking a single MARC file and splitting it up?
14:50 dbs yaz-marcdump has decent options
14:50 dbs it has pretty horrible man pages, sadly
14:51 dbs but roughly "yaz-marcdump -i marcxml -o marcxml -s <filename_prefix> -C <chunksize> <inputfile>"
14:51 dbs where <chunksize> would be 1000, or whatever
14:55 jl- dbs no I have a 1GB file with 238613 </record> tags
14:57 ericar joined #evergreen
15:02 kmlussier joined #evergreen
15:19 * Dyrcona mutters several times under his breath: "Somebody doesn't understand something. It is not my duty to correct them."
15:27 phasefx http://xkcd.com/386/
15:27 Dyrcona :)
15:28 Dyrcona I always think of that one when something like this comes up.
15:31 Bmagic|2 Has anyone else looked at item details for a newly cataloged item and observed a "Total Circs" > 0 ?
15:32 * phasefx can imagine something like that happening if folks are migrating data and don't reset their sequences
15:33 Bmagic|2 phasefx: which sequences would cause that?
15:33 phasefx (and if they're not using the sequence when migrating the data in the first place)
15:34 phasefx Bmagic|2: do you have direct access to the database?
15:34 phasefx sequences were a gut response, but narrowing it down to something specific, my imagination fails me
15:34 Bmagic|2 phasefx: yes
15:35 Bmagic|2 phasefx: I think I could find the trail to the answer if I knew which table(s) that the staff client used to come up with that number in the item details screen
15:36 * phasefx would look at the actual circs where target_copy = the item, and at extend_reporter.legacy_circ_count, where id = the item
15:36 phasefx actual circs = action.circulation
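
(phasefx's two lookups, spelled out against a hypothetical copy id 12345.)

    -- Live circulations recorded for the copy:
    SELECT count(*) FROM action.circulation WHERE target_copy = 12345;
    -- Migrated legacy count keyed on the same copy id:
    SELECT circ_count FROM extend_reporter.legacy_circ_count WHERE id = 12345;
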
15:36 Bmagic|2 phasefx: 0 rows for that asset.copy
15:38 Bmagic|2 phasefx: Thanks! it was extend_reporter.legacy_circ_count
15:39 Bmagic|2 So now, this means that there was a phantom asset.copy with that ID in the past?
15:42 jl- I made a post on the chunking if anyone is able to chip in http://stackoverflow.com/questions/21686893/split-xml-file-into-chunks-after-tag
15:43 phasefx Bmagic|2: you're welcome.  extend_reporter.legacy_circ_count would only ever get populated through a migration; it's likely a mistake was made at some point
15:44 Bmagic|2 phasefx: I see, I appreciate it!
15:46 phasefx jl-: you could use the perl module MARC::Record to read in the file and spit it out chunked.  Would give you a heads-up on problematic MARC in the process
15:51 jeff or yaz-marcdump as dbs recommended earlier.
15:52 jl- phasefx as far as I can tell I already sanitized it to be sucked into the bibs_ machine in /tests/datasets/sql
15:52 jl- I just get a connection lost at the end
15:52 jl- admittedly it's 17 million lines of records
15:52 jl- about 200 000 records
15:54 phasefx the Equinox migration_tools repository has a perl script called marc_cleanup (or something similar); it'll spit out one xml record per line; you could then use the split command on it
15:55 jl- can I specify how many records per file?
15:55 jl- or will I end up with 200 000 files
15:55 phasefx split -l num_of_lines_per_file
15:56 jl- phasefx the catch is that it shouldn't split before a </record> tag
15:57 phasefx that's why you need one record per line; split splits on lines
15:57 Dyrcona jl-: Are you trying to split the original MARC or the sql file?
15:57 jl- hmm
15:57 jl- Dyrcona I can go either way
15:58 jl- I made a script that will convert the MARC -> bibs_sql format
15:58 Dyrcona What dbs said earlier about yaz_marcdump will work beautifully on the MARC file.
15:58 phasefx looks like yaz-marcdump has chunking options
15:59 jeff at the risk of having it said three times, use yaz-marcdump on the marcxml file to break it into files of 1000 (or whatever) records each, then put those through whatever process you did with the original marcxml file containing all the records -- then try, and see what results you get.
15:59 phasefx there is power in saying something three times :)
15:59 Dyrcona Yes, it makes *It* appear....
16:00 jl- jeff seems like the way to go
16:00 jeff but keep in mind that the evergreen documentation has information on importing and migrating, and that this whole path might be the wrong way to go. i make zero assertions. i'm just recommending a reasonable way of splitting a large file of marc data into smaller files. :-)
16:01 jl- I actually only discovered the bibs_ loader by accident; other methods did not work for me
16:01 jl- and only because of dbs
16:01 * Dyrcona has found that every migration is unique and there is no one size fits all that works every time.
16:02 phasefx army of cataloging monks works almost every time :)
16:25 jl- goodnight
16:54 mrpeters left #evergreen
16:54 gdunbar joined #evergreen
16:56 jeff well huh.
16:56 afterl left #evergreen
16:56 jeff i have a hold that was cancelled Feb 7 but is no longer cancelled, yet I can find no record in logs of it being uncancelled.
16:57 Dyrcona Maybe it wasn't canceled?
16:58 sseng_ joined #evergreen
17:00 dbs The cancellation has been cancelled
17:08 jeff logs confirm it was cancelled, staff member confirms it was uncancelled. hold state in db is consistent with uncancellation.
17:08 * jeff looks closer
17:11 jeff (i now have a time, user, and workstation)
17:16 mmorgan left #evergreen
17:26 dcook joined #evergreen
17:31 ericar_ joined #evergreen
17:41 RBecker joined #evergreen
17:44 snowball_ joined #evergreen
17:53 jl- xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
17:53 jl- xsi:schemaLocation="http://www.loc.gov/MARC21/slim
17:53 jl- http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd"
17:53 jl- this means it's marc21 not marcxml right?
17:53 jl- I'm also a bit thrown off by the /slim
17:53 j_scott joined #evergreen
18:07 dbwells jl-: If you've got an XML namespace (xmlns), you have an XML record.  This is a MARCXML record.  MARC21 is really an umbrella term which specifies not only its own binary format (MARC (ISO 2709)), but also all the field numbering and similar such details.
18:11 jl- thx
18:11 dbwells In effect, as stated in the MARCXML homepage title on the LOC website, MARCXML -> "MARC 21 XML Schema"
18:21 jl- <leader>00687cam a2200229 a 45  </leader> < this indicates utf-8
18:22 dbwells Yes
20:23 finnx joined #evergreen
20:38 jboyer-isl joined #evergreen
20:39 remingtron joined #evergreen
20:39 dbwells joined #evergreen
21:11 dac joined #evergreen
22:04 jeff joined #evergreen
22:05 jeff joined #evergreen
22:07 ldwhalen joined #evergreen
22:12 ldwhalen joined #evergreen
22:25 dbs berick: I'm firing up a testing vm for the wcag commits. I bet you felt dirty letting that 4-image border for the fine box go through :)
22:47 dbs Hmm. On Chrome I just deleted the offending divs and it didn't change the look one bit
22:47 * dbs tests further
