06:14 --- mtate joined #evergreen
06:14 --- eeevil joined #evergreen
06:15 --- maryj joined #evergreen
06:15 --- phasefx joined #evergreen
06:15 --- BigRig joined #evergreen
06:15 --- Callender joined #evergreen
06:15 --- TaraC joined #evergreen
06:16 --- graced joined #evergreen
07:57 --- rjackson-isl joined #evergreen
08:18 --- akilsdonk joined #evergreen
08:20 --- Shae joined #evergreen
08:22 --- ericar joined #evergreen
08:26 --- mrpeters joined #evergreen
08:31 <kmlussier> Happy Friday #evergreen!
08:36 --- sbrylander joined #evergreen
08:37 --- abowling joined #evergreen
08:40 --- mmorgan joined #evergreen
08:45 --- collum joined #evergreen
08:59 --- ericar_ joined #evergreen
09:08 --- Dyrcona joined #evergreen
09:11 * Dyrcona switches hot spots. Wifi on my laptop is all messed up.
09:12 --- Dyrcona1 joined #evergreen
09:24 <jeff> kmlussier: happy Friday!
09:38 <bshum> @hates
09:38 <pinesol_green> bshum hates metarecord holds; acquisitions; questions about acquisitions; z39.50; acq; notices; action triggers; marc_export; hold pull lists; drupal; holds; RDA; edi; parts holds; kpac; SIP; parts; marc; RAID; Launchpad; edi troubleshooting; reports; serials; apostrophes in search; server power failures; zimbra; office printer; weird barcodes; PO JEDI; B&T; using the phone; horizontal summary (1 more message)
09:38 <bshum> @more
09:38 <pinesol_green> bshum: display; vertical summary display; backporting; A/T; custom reporter stuff; dojo; and snow
09:38 <bshum> Oh good it's there already.
09:38 <kmlussier> @loves bshum
09:39 <pinesol_green> bshum loves Evergreen; chocolate chip cookies; git; tpac; yaous; rain; piwik; lunch; Fridays; cake; pizza; Star Trek; pgadmin; donuts; autoupdate; quassel; kvm; roulette; and PostgreSQL magic
09:39 <bshum> snow--
09:39 <kmlussier> bshum: you need more love and less hate in your life. ;)
09:39 <bshum> @love snow days
09:39 <pinesol_green> bshum: The operation succeeded. bshum loves snow days.
09:40 <kmlussier> @loves
09:40 <pinesol_green> kmlussier loves parts; YAOUS; Fridays; clam chowder; coffee; new fanged email thing; quassel; magic eightball; trivia; Evergreeners; BBQ; spell check; mobile catalog; new edit links in the catalog; vim; and pizza
09:40 <kmlussier> @hates
09:40 <pinesol_green> kmlussier hates git; Launchpad search; Internet Explorer; snow; scheduling meetings; and Starbucks
09:41 <jeff> i get a chuckle out of "new fanged"
09:41 <kmlussier> @hate negative balances
09:41 <pinesol_green> kmlussier: The operation succeeded. kmlussier hates negative balances.
09:41 <jeff> seems to imply that the email has grown teeth or something.
09:41 <kmlussier> I've received email with teeth.
09:42 <Dyrcona> heh
09:44 <jeff> Content-Type: teeth/fangs
09:44 <kmlussier> I was trying to remember the context under which I had added that love. Apparently, it came from RoganH http://irc.evergreen-ils.org/evergreen/2014-02-03#i_65979
09:49 <bshum> @dontcare server power failures
09:49 <pinesol_green> bshum: The operation succeeded. bshum no longer hates server power failures.
09:50 --- julialima_ joined #evergreen
09:50 --- Stompro joined #evergreen
09:54 <bshum> @sarcasticlove parts
09:54 <pinesol_green> bshum: But bshum already hates parts!
09:57 <kmlussier> bshum: You don't hate server power failures?
09:58 <bshum> Not today anyways.
09:58 --- remingtron joined #evergreen
10:32 <csharp> wow - so postgres replication is not at all scary to set up - nowhere near as complicated as slony
10:32 <csharp> I'm running it in some test VMs - so far so good
10:33 <kmlussier> csharp: Is that something you're thinking of doing in production?
10:33 <bshum> csharp: It is pretty awesome.
10:33 <kmlussier> I know it's something that OmniTI recommended in their report.
10:36 <csharp> kmlussier: yes, beginning next weekend
10:37 <csharp> slony is such a PITA, I'm happy to see a straightforward setup
10:37 <csharp> *and* there's automatic failover, which is not possible with slony
10:38 <csharp> still learning, but it's nice to start the day with a success ;-)
10:38 <bshum> It was always my understanding that Slony's big deal was that it allowed you to pick and choose which bits to replicate?
10:38 <bshum> vs. innate replication which seems to just do the whole thing
10:38 <csharp> yes, that's the *remaining* advantage of slony
10:38 <csharp> that mattered before we had three identical servers with lots of disk space
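(For reference: the built-in streaming replication setup being compared to Slony here looks roughly like the sketch below on 9.x-era PostgreSQL. Hostnames, the `replicator` role, and all values are illustrative, not from the channel.)

```ini
# postgresql.conf on the master
wal_level = hot_standby       # emit enough WAL detail for a hot standby
max_wal_senders = 3           # connection slots available to standbys
wal_keep_segments = 128       # WAL kept around for standbys that fall behind

# postgresql.conf on the standby
hot_standby = on              # allow read-only queries during recovery

# recovery.conf on the standby
standby_mode = 'on'
primary_conninfo = 'host=db-master.example.org user=replicator'
```

The master's pg_hba.conf also needs a `host replication replicator ...` entry for the standby's address.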
10:39 <bshum> @love new hardware
10:39 <pinesol_green> bshum: The operation succeeded. bshum loves new hardware.
10:39 <bshum> @love new toys
10:39 <pinesol_green> bshum: The operation succeeded. bshum loves new toys.
10:39 <kmlussier> csharp: Keep us posted on how things go.
10:39 <berick> hm, just occurred to me we need to add bzr to the packager prereqs
10:39 <bshum> Hmm
10:40 <kmlussier> bshum: I'm happy to see you're following up on my advice from earlier. :)
10:40 <eeevil> csharp: that, and the fact that long-running queries will either be killed or stall replication (and risk filling disk with WAL)
10:40 <bshum> berick: For translation sync right?
10:40 <eeevil> where "that" is "the remaining advantage of slony"
10:40 <berick> bshum: yah
10:42 <csharp> eeevil: oh - good to know
10:42 * csharp wonders how often that will be an issue
10:44 <csharp> eeevil: on the ESI-administered locations, are you using slony or built-in?
10:44 <eeevil> csharp: killing queries? any long running report is at risk
10:45 <eeevil> and, both, for different purposes
10:45 <csharp> oh - hmm
10:45 <eeevil> there are good reasons (and use cases) for both
10:45 <csharp> we regularly have reports that run 2+ hours, even on the new hardware
10:46 <eeevil> the timeout is configurable, and probably infinite by default
10:46 <eeevil> but a long running query also keeps the master from doing some cleanup
10:47 <eeevil> or did, at least ... not sure if that is fixed in a release yet or not
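(The killed-or-stalled tradeoff eeevil describes is governed by the standby's conflict settings; parameter names are from the 9.x documentation, values below are illustrative.)

```ini
# postgresql.conf on the standby
max_standby_streaming_delay = 30s   # how long WAL replay waits for a
                                    # conflicting query before cancelling it;
                                    # -1 waits forever, letting replay stall
                                    # (and WAL pile up upstream)

# Asks the master not to vacuum away rows a standby query still needs.
# Avoids query cancellations, at the cost of delaying cleanup on the master.
hot_standby_feedback = on
```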
10:47 <dbs> pretty sure those early issues were fixed relatively early too
10:47 * csharp is doing more rtfm work
10:48 <eeevil> dbs: I hope so :)
10:49 <eeevil> there were a spate of bad replication-related data-loss bugs, IIRC, about 9-12 months ago, too ... those were scarier, but fixed relatively rapidly
10:53 <dbs> yep
10:54 <csharp> dbs: so you're running native PG replication?
10:55 <kmlussier> @marc 024
10:55 <pinesol_green> kmlussier: A standard number or code published on an item which cannot be accommodated in another field (e.g., field 020 (International Standard Book Number), 022 (International Standard Serial Number), and 027 (Standard Technical Report Number)). The type of standard number or code is identified in the first indicator position or in subfield $2 (Source of number or code). (Repeatable) (1 more message)
10:55 <csharp> marc--
10:56 <kmlussier> @karma marc
10:56 <pinesol_green> kmlussier: Karma for "marc" has been increased 0 times and decreased 26 times for a total karma of -26.
11:01 <jboyer-isl> I missed all of the Postgres replication talk, but what version of Pg are you running csharp?
11:01 <csharp> jboyer-isl: 9.3
11:02 <csharp> reading up on cascading replication (asynchronous only) vs. synchronous replication
11:04 <jboyer-isl> That's probably for the best. We're on 9.1 and after having one of our 2 replicants fall off the end for a second time I'm going back to Slony until at least 9.3 or 9.4. :/ (in 9.1 you had to guess how many WAL segments your replicants might lag; I think in 9.3 or .4 the master won't throw away files that slaves still need)
11:04 <berick> bshum: dbwells: FYI started on http://wiki.evergreen-ils.org/doku.php?id=dev:release_process:evergreen:2.8
11:05 <csharp> jboyer-isl: wow - well that's the kind of real-world-usage information I'm after
11:06 <bshum> berick++ # sweet
11:06 <jboyer-isl> There is a lot of traffic along the link that this replicant was using, but even with 6000+ WAL segments being kept it didn't make it 2 weeks. Identical hardware that sits on the same switch is still keeping up with a much faster master.
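(The guessing game jboyer-isl describes is the pre-9.4 `wal_keep_segments` knob; replication slots, new in 9.4, instead make the master retain exactly the WAL each registered standby still needs. Value below is the one from the conversation, shown for illustration.)

```ini
# postgresql.conf on the master (pre-9.4 approach): guess how far a
# standby might lag and keep that many segments on hand
wal_keep_segments = 6000   # segments are 16 MB each, so budget disk for this;
                           # a standby that lags further falls off and must
                           # be rebuilt from a fresh base backup
```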
11:06 * berick is done editing for now
11:09 <csharp> jboyer-isl: interesting
11:09 <dbs> csharp: yes, we were when we were hosted by UoGuelph, for years. always asynchronous, and primarily for disaster recovery and offline reads.
11:09 <dbs> You really don't want to do synchronous.
11:10 <csharp> dbs: ok - good to know
11:10 <csharp> our ideal setup would be cascading
11:10 <csharp> 1 to 2 to 3
11:11 * dbs likes marc 024 for URIs
11:12 <jboyer-isl> csharp, I didn't think you could do that. If there's a recovery.conf in place I didn't think the server would start any wal_senders. Is that something new in 9.2 or .3?
11:12 <dbs> hey berick, here's a fun old one: python oils.utils is hardcoded to speak http, so you can't talk to an https-only server
11:12 <dbs> 9.2 IIRC is where cascading started to be introduced
11:13 <jboyer-isl> Ah, I see. Never mind then, I'm just living in the past.
11:13 <csharp> jboyer-isl: http://www.postgresql.org/docs/current/interactive/warm-standby.html#CASCADING-REPLICATION
11:13 * csharp is actually reading the 9.3 version of that page
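(A cascading layout like csharp's "1 to 2 to 3" would look roughly like this on 9.2+, where a standby can itself run WAL senders. Hostnames and the replication role are illustrative.)

```ini
# server 2 (intermediate standby): postgresql.conf
hot_standby = on
max_wal_senders = 3        # lets this standby feed a downstream standby

# server 2: recovery.conf -- follows server 1 (the master)
standby_mode = 'on'
primary_conninfo = 'host=db1.example.org user=replicator'

# server 3: recovery.conf -- follows server 2, not the master
standby_mode = 'on'
primary_conninfo = 'host=db2.example.org user=replicator'
```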
11:13 |
berick |
dbs: ah, osrf/gateway.py. yeah, that's dumb |
11:14 |
dbs |
bingo :) |
11:14 |
bshum |
berick: Cool, I'm poking a bit to add in some bits from the google doc |
11:15 |
berick |
bshum++ |
11:16 |
jboyer-isl |
csharp, I was going to say, careful reading current; you might make yourself sad when $NEW_HOTNESS isn't actually available on your version. :D (happened to me recently. :/ ) |
11:16 |
csharp |
heh |
11:17 |
|
vlewis joined #evergreen |
11:26 |
* Stompro |
slowly eats popcorn as postgres replication discussion goes on. Better than the hobbit movies. |
11:29 |
kmlussier |
Stompro: You and I have different ideas of what entertainment is. :) |
11:29 |
kmlussier |
@weather 02771 |
11:29 |
pinesol_green |
kmlussier: The current temperature in Rumford, East Providence, Rhode Island is 24.6°F (11:25 AM EST on January 09, 2015). Conditions: Light Snow. Humidity: 91%. Dew Point: 23.0°F. Windchill: 17.6°F. Pressure: 29.99 in 1016 hPa (Steady). |
11:30 |
* kmlussier |
enjoys watching the snow lightly fall from her front window. Very pretty. |
11:31 |
Stompro |
@weather 58102 |
11:31 |
pinesol_green |
Stompro: The current temperature in Fargo, North Dakota is -0.4°F (9:53 AM CST on January 09, 2015). Conditions: Clear. Humidity: 63%. Dew Point: -9.4°F. Windchill: -18.4°F. Pressure: 30.42 in 1030 hPa (Rising). Wind Chill Advisory in effect until noon CST today... |
11:31 |
csharp |
weather 30033 |
11:31 |
csharp |
@weather 30033 |
11:31 |
pinesol_green |
csharp: The current temperature in Leafmore, Decatur, Georgia is 33.1°F (11:31 AM EST on January 09, 2015). Conditions: Clear. Humidity: 43%. Dew Point: 12.2°F. Windchill: 33.8°F. Pressure: 30.41 in 1030 hPa (Rising). |
11:31 |
|
ericar_ joined #evergreen |
11:38 |
Dyrcona |
ooh load is 100. |
11:43 |
csharp |
yowza |
11:44 |
mrpeters |
@weather 46060 |
11:45 |
mrpeters |
so cold it wont even tell you :) |
11:45 |
pinesol_green |
mrpeters: The current temperature in Downtown Noblesville, Noblesville, Indiana is 12.7°F (11:45 AM EST on January 09, 2015). Conditions: Scattered Clouds. Humidity: 38%. Dew Point: -7.6°F. Windchill: 1.4°F. Pressure: 30.32 in 1027 hPa (Rising). Wind Chill Advisory in effect until 7 am EST Saturday... |
11:45 |
Dyrcona |
load is back down to 10 or so, but ejabberd is using 200% cpu. |
11:46 |
bshum |
Man, you guys cannot seem to catch a break :( |
11:46 |
Dyrcona |
And, giving me this: I(<0.9788.0>:ejabberd_listener:293) : (#Port<0.2785>) Failed TCP accept: emfile |
11:46 |
mrpeters |
oh man this winter has been kind |
11:46 |
Dyrcona |
Which means it can't open new files. |
11:46 |
mrpeters |
we have only had 2 snows, and this is the first bitter cold |
11:47 |
mrpeters |
this time last year we were close to breaking snowfall records, so im not complaining too much |
11:48 |
Dyrcona |
mrpeters: similar here. |
11:48 |
Dyrcona |
though maybe not record snow fall last year, but we had over a week of bitter cold right around new year last year. |
11:49 |
mrpeters |
im staying on the positive side, we're only about 6 weeks from 50's being the norm |
11:50 |
mrpeters |
we usually start having warm days and tornadoes by the time march madness is wrapping up |
11:50 |
Dyrcona |
I don't get whey ejabberd says it can't open files. |
11:50 |
Dyrcona |
We only have 61662 open file handles. |
11:50 |
Dyrcona |
On the whole system. |
11:51 |
Dyrcona |
ejabberd only has 601 open. |
11:51 |
Dyrcona |
opensrf has 52587 |
11:51 |
Dyrcona |
And the number of open files is dropping. |
11:52 |
Dyrcona |
We didn't hit 200 cstore drones this time, either. |
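(A quick way to sanity-check a per-process descriptor count against the limit that process actually sees is /proc. This sketch uses the current shell's pid as a stand-in for the real ejabberd/beam pid, which you'd get from pgrep or the pid file.)

```shell
# Count open file descriptors and read the effective soft limit for a
# process via /proc. $$ stands in for the ejabberd pid here.
pid=$$
open_fds=$(ls /proc/$pid/fd | wc -l)
soft_limit=$(awk '/Max open files/ {print $4}' /proc/$pid/limits)
echo "pid=$pid open_fds=$open_fds soft_limit=$soft_limit"
```

Comparing /proc/PID/limits against `ulimit -n` in your own shell is exactly how a mismatch introduced by an init script shows up.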
12:04 <Dyrcona> Don't suppose I could just restart ejabberd and no one would notice. ;)
12:05 --- AliceR joined #evergreen
12:11 <jboyer-isl> Dyrcona, If it really did hit the max file limit something would probably have died and caused potentially tons of file handles to close. Have you looked in that machine's syslog for 'Too many open files'?
12:11 <jboyer-isl> The best part about hitting that limit is that you never know who's going to win the shutdown lottery!
12:25 <Dyrcona> jboyer-isl: No "Too many open files" messages in syslog, and kern.log is dated yesterday sometime.
12:26 <jboyer-isl> I suppose that would have been too easy. :/
12:27 <Dyrcona> Most of syslog is messages from postfix that ought to just be in mail.log, but I'm not gonna fix that right now.
12:34 <Dyrcona> Weird: at 11:35:26 I see ejabberd accepted a connection.
12:38 <Dyrcona> After that, all ejabberd log messages are the emfile error.
12:38 <Dyrcona> Despite ejabberd not accepting new connections, things seem to be working for now.
12:43 --- juneo joined #evergreen
12:43 <Dyrcona> I think I should really learn more about erlang.
12:44 --- jihpringle joined #evergreen
12:48 <Dyrcona> The typical answer of increasing ulimit can't be correct, since ulimit -n for ejabberd is 65535 and ejabberd only has 624 files open.
12:50 <Dyrcona> Looks like we changed it across the board, not just on a user-by-user basis.
13:00 --- julialima joined #evergreen
13:28 <eeevil> Dyrcona: is pam respecting configured limits for both login and su? the latter is needed when startup scripts su to the ejabberd user, I believe (see: http://metajack.im/2008/09/23/file-descriptors-are-yummy-or-common-pitfalls-of-ejabberd/ )
13:31 <Dyrcona> eeevil: Thanks. I skipped that one, because I saw something about ERL_MAX_PORTS.
13:32 <Dyrcona> eeevil: After looking at pam, login does respect limits, but su doesn't, and that could be our problem since the startup script does su when starting ejabberd.
13:32 <Dyrcona> eeevil++
13:32 <Dyrcona> I used to like pam, but now I don't. :)
13:33 <eeevil> heh
13:39 <Dyrcona> What I'm not finding is what happens if the limits for all users are set to 65535 and the line is commented out in su. Will pam use some built-in limit?
13:42 <jeff> even if i thought i knew the answer, i'd test by doing something like echoing the output of uname -a into a file from within an init script that uses su
13:42 <jeff> (from a command run with su from an init script, that is)
13:48 <Dyrcona> Here's a little something from the Debian wiki: Note that pam_limits is not used in /etc/pam.d/common-session and /etc/pam.d/common-session-noninteractive, so it won't be active for daemons
13:51 <Dyrcona> Dunno if that applies here for certain, since the init script does use su.
13:51 <Dyrcona> Dunno if setting it in either of those files would do anything.
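(Along the lines of jeff's suggestion: a rough way to see whether a limit survives into a child process, as it would have to when an init script launches the daemon. This uses a plain child shell rather than su, since su's behavior depends on the pam config under test.)

```shell
# Show the fd limit in the current shell, then the limit a child
# process inherits; a daemon started from a script sees the latter.
parent_limit=$(ulimit -n)
child_limit=$(sh -c 'ulimit -n')
echo "parent=$parent_limit child=$child_limit"
```

For the su case itself, something like `su -s /bin/sh -c 'ulimit -n' opensrf` run as root (account name hypothetical) shows what pam_limits actually applies.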
13:54 * dbs wonders if we have a version of reporter.super_simple_record with non-normalized author / title / etc yet
13:55 * Dyrcona doesn't.
13:56 * dbs should probably just create a view based on a MODS-ified version of a record for adhoc query/report purposes
13:56 <csharp> dbs: we don't, that I know of. our libraries want that, though
13:56 <csharp> it would be very welcome
13:56 * dbs hatesesss the normalization :)
13:56 <dbs> (I understand why it's there, but yeah, need an alternative)
13:57 <berick> *cough* metabib display fields
13:57 <dbs> okay. will see what I can put together for the humans
13:57 <csharp> it's been a pet peeve - nothing to rise to the top of anyone's wish list
13:57 <dbs> yet
13:57 <dbs> yeah
13:58 <Dyrcona> Now, after having all this conversation about pam and limits, I swear that I did see ejabberd have more than 1024 files open earlier today.
13:58 <Dyrcona> However, using tmux, I don't have any console scrollback.
14:00 <kmlussier> dbs: We would love to see non-normalized fields.
14:01 --- finnx joined #evergreen
14:02 <kmlussier> It's why I was so excited for display fields, but I guess that's not ready yet.
14:06 <Dyrcona> su doesn't appear to be the problem.
14:06 <Dyrcona> With or without pam_limits.so enabled, I get the right limit with just su.
14:13 <bshum> Does anyone use clark-kent.pl with concurrency other than 1? I'm trying to remember how to have clark handle multiple reports running at once, but it's been too many years since I last tried it out.
14:16 <berick> bshum: pretty sure changing the concurrency value is all you have to do
14:20 <bshum> Okay, so something like... "clark-kent.pl --concurrency=3 --daemon" should run up to 3 reports at once
14:20 * bshum tries it and sees what explodes
14:21 <dbs> bshum: we've been doing concurrency 2 for years
14:22 <csharp> PINES uses '-c 12' nowadays
14:27 * jeff idly wonders if anyone has run a report in the last year other than a few legacy ones he's run by hand
14:32 --- Tracy joined #evergreen
14:36 --- Tracy_ joined #evergreen
14:37 <Tracy_> Hello. I don't have much experience with chat rooms, but am very pleased to be here and attending this MARC training.
14:38 <dbs> Welcome Tracy_
14:38 <jeff> Tracy_: greetings!
14:39 <jeff> Tracy_: are you in a training session where someone mentioned this channel?
14:40 <Tracy> Yes. I'm from the Lake County Library and am a new hire for Technical Services Assistant.
14:42 <jeff> congratulations!
14:43 <Tracy_> I don't have any questions for the homework this week. But am logging in for the first time to get acquainted with the chat room and to fulfill my assignment of logging in at least once per week!
14:44 <Tracy_> Thank you, Jeff.
14:47 <Tracy_> Looking forward to learning much and getting acquainted with everyone. Have a good weekend!
14:48 <bshum> Tracy_: Welcome aboard! In case it interests you, there's a cataloger discussion mailing list that some Evergreeners use
14:48 <Tracy_> Yes, please. That would be helpful.
14:48 <bshum> Occasionally some of them are in IRC here too, but you may find more interesting topics discussed on those email threads
14:48 <bshum> http://list.evergreen-ils.org/cgi-bin/mailman/listinfo/evergreen-catalogers
14:50 <Tracy_> Got it. Thanks!
15:07 <kmlussier> I don't know anything about the MARC class, but I love the instructor who asked her students to get acquainted with #evergreen.
15:07 --- Sally joined #evergreen
15:09 <bshum> It does give you that warm fuzzy feeling inside doesn't it?
15:09 * dbs suspects that the display_field approach may benefit significantly from postgresql 9.3 and LATERAL
15:10 <kmlussier> #evergreen often gives me a warm fuzzy feeling inside.
15:11 <bshum> dbs: Does that mean that we should make a hard requirement to use PG 9.3 for 2.8+? (I already feel like PG 9.3 should have been the base version for 2.7, but anyways....)
15:11 <dbs> otherwise, the pre-9.3 approach seems likely to be another materialized table populated by triggers. could be just my little brain not dealing well with attempts to harness biblio.extract_metabib_field_entry(bibid) for the powers of good
15:12 <dbs> bshum: 9.1 end of life is September 2016
15:13 <bshum> Oh, that'll line up nicely with 3.0 then ;)
15:13 <dbs> (per http://www.postgresql.org/support/versioning/) - 9.3 minimum + dropping script-based circ + newfangled websocketry all sounds like a good thing to get in place for an upcoming release
15:13 <dbs> 2.10? :)
15:14 <bshum> Nooooooo
15:14 <bshum> :)
15:15 <bshum> I imagine that 3.0 would be a good marketing concept for the day we can officially cut the cord between the XUL and web based staff clients.
15:15 * bshum would buy that t-shirt
15:16 * dbs would buy that t-shirt, wear it, then give a lightning talk where he rips it off Hulk Hogan-style to reveal a "XUL FOREVER" shirt underneath.
15:16 <dbs> Then rip that one off to reveal "I kid, I kid" under that
15:16 <bshum> dbs++
15:16 <bshum> Haha
15:18 * bshum would go to that lightning talk :)
15:19 <mtate> dbs++
15:24 <kmlussier> dbs++
15:24 <kmlussier> Are display fields doable for 2.8?
15:32 <dbs> kmlussier: some elements of them would be. not sure what the whole desired scope is
15:33 <kmlussier> dbs: Nor do I.
15:48 <eeevil> dbs / kmlussier: re display fields, the full scope would be TPAC + replace MVR + reporting (and likely more), so probably not all for 2.8, but I think TPAC and reporting might be
15:50 <eeevil> dbs: and, IIRC, I think display fields could (is planned to?) reuse browse_entry, since the data in there is focused on patron display normalization already, to avoid another copy of /all/ the data
15:50 <eeevil> but, going from memory on that second part
15:51 <kmlussier> eeevil: What would display fields do for TPAC? I think I've asked this question before, but I don't remember the answer now.
15:52 <jeff> for one thing, they could eliminate the need for tpac to parse the MARCXML blob for every item on every search result page.
15:53 <berick> kmlussier: remove all the xpath cruft from the templates and just display pre-calculated values from the DB for record fields
15:53 <kmlussier> berick: Gotcha! Thanks
15:53 <jeff> (unless you wanted to override without re-ingest, in which case the marcxml would still be available to you)
15:53 <berick> what jeff said, in an ideal world, but completely getting rid of all xpath/parsing would be a significant job, i think
15:53 <berick> jeff: that, too
15:58 <kmlussier> I wish I understood reports better.
15:58 <bshum> kmlussier: Be careful what you wish for.
15:59 <kmlussier> When you use the "in list" operator on a filter, why is it that you sometimes are presented with a list of defined values when you create the report, but other times you have to enter the text?
16:01 <bshum> Maybe it's a goof in the fieldmapper?
16:01 <bshum> And it doesn't know where to get the list from
16:02 <kmlussier> For example, I can get a nice list of org unit IDs to choose from or a list of fund codes to choose from. But if I want to filter by PO state, I need to manually enter the text.
16:02 <Stompro> Should there be a post on evergreen-ils.org and something to the mailing lists about EG 2.7.2, 2.6.5 and OpenSRF 2.4.0? The last release post is for 2.7.1, 2.6.4 and 2.5.8. I've been out for a month and I'm just trying to catch up.
16:03 <jeff> back when we used to run reports that way (okay, I'll stop), I seem to recall two common causes: a lack of proper linkage in the fieldmapper, or selecting the "wrong" element in the report template editor.
16:03 <bshum> Stompro: In theory, we should probably blog more. In practice, that isn't always the case.
16:04 <berick> eeevil: IIRC, there was talk of using browse_entry for display fields, but no action.
16:04 <berick> all talk, no walk
16:04 <kmlussier> jeff: OK, so maybe I need to find the "right" element.
16:05 <bshum> kmlussier: What is PO State related to? (is that an actual state, like CT?)
16:05 <berick> kmlussier: PO states do not come from a list maintained in the DB.
16:05 <eeevil> sorry, looked away ... /me reads up
16:05 <kmlussier> I'm copying a report from a reports guru where the same thing happens, so I suspect the cause is the former, not the latter.
16:05 <jeff> ah, there you go then. berick has the true reason in this case.
16:06 --- julialima left #evergreen
16:06 <kmlussier> berick: Really? I always assumed there was a list because there was a limited number of states that could be used.
16:06 <kmlussier> berick: And the same is true for lineitem states?
16:06 <kmlussier> bshum: State as in pending, on order, complete, etc.
16:06 <bshum> Oh state, got it now...
16:07 <bshum> :)
16:07 <berick> kmlussier: same for lineitems.
16:08 <kmlussier> OK, then. I won't worry about it and move on. At least I now know the answer to a question that has been bugging me for a while. :)
16:08 <kmlussier> Actually, I lied about moving on.
16:08 <berick> kmlussier++
16:09 <kmlussier> berick: So that would explain why, when you do an acquisitions search, you can't select a state from a nice dropdown box? That's something that's been on our wishlist for a while.
16:09 <berick> kmlussier: yep
16:10 <berick> should be possible to put a custom state-picker widget into the search form, though
16:10 <kmlussier> berick: Possible as in hours and hours of development, or possible as in an easy tweak?
16:10 <berick> hm, somewhere in the middle
16:10 <kmlussier> berick: Good to know for the next time it comes up. Thanks!
16:20 <bshum> Stompro++ # proactive :D
16:22 <kmlussier> Stompro++
16:24 <jboyer-isl> kmlussier, depends on if the field is a primary key or not.
16:24 <jboyer-isl> HEY look who can't scroll.
16:26 <kmlussier> :)
16:27 <dbs> Fruits of labor: http://goo.gl/CEj1F7 (no non-normalized publisher info yet, config.metabib_field is likely to grow if we really want to centralize this stuff)
16:27 <dbs> and then we'll get to try and deal with RDA :)
16:28 <dbs> Next stop, BIBFRAME
16:30 <kmlussier> dbs++
17:05 <pinesol_green> Incoming from qatests: Test Failure - http://testing.evergreen-ils.org/~live/test.html
17:12 <dbs> Archive::Zip--
17:13 <dbs> We've been living with broken windows and ugly graffiti too long
17:32 --- mmorgan left #evergreen