05:18 *** degraafk_ joined #evergreen
06:01 <pinesol> News from qatests: Testing Success <http://testing.evergreen-ils.org/~live>
08:18 *** mantis joined #evergreen
08:19 *** Dyrcona joined #evergreen
08:58 *** miker joined #evergreen
08:58 *** pinesol joined #evergreen
09:16 *** stephengwills joined #evergreen
09:18 *** csharp__ joined #evergreen
09:18 *** book` joined #evergreen
09:23 <Bmagic> With the XUL client, whenever anything changed in actor.org_unit, running autogen.sh and restarting Evergreen services was required. Is that still the case? What is everyone else doing these days? We still schedule changes to actor.org_unit at night.
09:29 <Dyrcona> Bmagic: My experience is that autogen.sh is still generally required. Restarting services and Apache is generally a good idea.
09:30 <Dyrcona> We generally add new org units early in the morning, say 6:00 AM local time or so.
09:30 <Bmagic> XUL would lose its mind when actor.org_unit got changed. But I'm thinking that the web-based staff client might not.
09:30 <Dyrcona> It doesn't lose its mind, but changes don't always show up.
09:31 <Dyrcona> I could be cargo culting....
09:37 <Dyrcona> I don't recall if the OPAC picks up changes right away, either.
09:39 <Dyrcona> Bmagic | gmcharlt: FYI I'm messing with Pg settings on my test db server. I'm changing the ports that the different instances listen on and reviewing the optimization settings. I'm going to come up with a set of optimizations to use more of the hardware and possibly modify the "40%" settings.
09:40 <Dyrcona> I'm also going to open a Lp bug and a Google doc for notes, etc. I think I'll make a folder for the project, so I can share multiple docs if necessary.
09:50 *** jvwoolf joined #evergreen
09:51 <miker> Bmagic: autogen's main purpose, outside of upgrades (where new classes are routinely added, and field positions change), is to flush the copy of the org tree (and related structures) from memcached. that's used by the web client just as much as it was the xul client. so, yes, you must run it after changes to the org tree that get cached there. also, there is some in-process caching in both mod_perl and certain service backends, so an apache and service restart is necessary these days.
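miker's advice above amounts to a short runbook. A sketch of the sequence, assuming a conventional Evergreen install (the paths and service names below are typical but site-specific, not canonical):

```shell
# After changes to actor.org_unit, per the discussion above.
# Assumes the standard /openils prefix and a Debian-style Apache;
# adjust paths and service names for your site.
cd /openils/bin
./autogen.sh                             # regenerate cached files and flush the org tree from memcached
osrf_control --localhost --restart-all   # restart OpenSRF services to clear in-process backend caches
sudo systemctl restart apache2           # restart Apache to clear mod_perl caching
```

This is an operational fragment for an Evergreen server, not something runnable elsewhere; the point is the order: autogen first, then services, then Apache.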
10:15 <Bmagic> miker++ Dyrcona++ # We will continue to do that stuff at night :)
10:23 * Dyrcona scratches his head wondering how he came up with some of his production Pg settings.....
10:26 <Dyrcona> I should probably adjust a couple of them.
10:40 <Dyrcona> All right, upgrading to master on Pg 10 with "100%" optimization, which is more like 94%, since I left some room for the other Pg instances.
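The "40%" and "100%" figures in this thread describe what fraction of the machine's RAM the Pg tuning targets; the log never says which postgresql.conf parameters they feed, so the sketch below only illustrates the sizing arithmetic (64 GB total RAM is an invented example, and the resulting value is hypothetical, not a recommendation):

```shell
# Hypothetical sizing arithmetic: scale a memory-based Pg setting to a
# percentage of total RAM. 65536 MB (64 GB) is a made-up example value.
ram_mb=65536
pct=40
setting_mb=$(( ram_mb * pct / 100 ))
echo "candidate setting: ${setting_mb}MB of ${ram_mb}MB (${pct}%)"
```

On a shared box, as Dyrcona notes, you would budget below 100% so co-hosted Pg instances keep some headroom.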
10:41 <Dyrcona> Also, I'm not sure how long this machine will hold out, since one of the drives on the RAID has a flashing amber light.
12:06 *** nfBurton joined #evergreen
12:13 <Dyrcona> bshum++
12:53 *** sandbergja joined #evergreen
13:08 <jeff> if you have a degraded array, you might want to correct that before putting effort into running tests.
13:09 <jeff> though it's possible the results would be suitable for comparison with each other... maybe.
13:12 <Dyrcona> jeff: I know. We replaced some drives in this machine a few years ago. I'll ask if we have any spares left over. If not, I may not have a choice at this point.
13:13 <Dyrcona> If that's the case, then I'll see what other hardware we have available. We don't have anything else with as much disk space, so it would make the testing rather time consuming.
13:14 <jeff> you might also be able to reconfigure the storage in a way that requires fewer physical drives.
13:14 <Dyrcona> "If that's the case," meaning if we don't have spares.
13:14 * jeff nods
13:14 <Dyrcona> Meh. I hate mucking with hardware RAID.
13:16 <Dyrcona> Also, I should be working on something else with a higher priority at the moment, but that's almost always the case when it comes to things like this.
14:00 <sandbergja> How are folks working around bug 1901726 (that "Default Phone" required field in the patron edit screen, even when there's a daytime phone available)?
14:00 <pinesol> Launchpad bug 1901726 in Evergreen "Required Fields Based on Hold Notification Preferences Too Strict" [Medium,Confirmed] https://launchpad.net/bugs/1901726
14:00 <sandbergja> We just upgraded from 3.4, and circ staff would love to be able to make changes to patron records without having to copy/paste phone numbers to make the form validation happy.
14:11 <Dyrcona> sandbergja: It might be a good idea to ask that question on the general mailing list. The person at CW MARS who could answer your question doesn't come to IRC but is subscribed to the general list, afaik.
14:16 <sandbergja> Dyrcona++
14:16 <sandbergja> thanks, I will
14:18 *** csharp_ joined #evergreen
14:28 <pinesol> [evergreen|Gina Monti] Docs: update glossary.adoc to add TLD definition per LP1837753 - <http://git.evergreen-ils.org/?p=Evergreen.git;a=commit;h=54db2f4>
14:39 <sandbergja> mantis++
14:39 <sandbergja> abneiman++
14:52 <pinesol> [evergreen|Gina Monti] Docs: Update apache_rewrite_tricks.adoc to further address LP1837753 - <http://git.evergreen-ils.org/?p=Evergreen.git;a=commit;h=a239d4e>
15:14 <mantis> Is there a better system than OpenKiosk for self check? We had two libraries report issues using it.
15:15 <abneiman> mantis++ sandbergja++
15:23 <csharp_> @band add Degraded Array
15:23 <pinesol> csharp_: Band 'Degraded Array' added to list
15:24 <csharp_> @who is a huge fan of [band] 's top hit "Mucking Around with Hardware RAID"?
15:24 <pinesol> Keith_isl is a huge fan of Possibly Unhandled Rejection 's top hit Mucking Around with Hardware RAID.
15:25 *** jvwoolf joined #evergreen
15:59 <Keith_isl> Good bot
16:01 <Keith_isl> It's been a good while since I've done that sort of mucking about; most of the RAIDs I deal with these days are 10-20 man.
16:05 <Dyrcona> Heh.
16:05 <Dyrcona> I've switched to ZFS and pools.
16:09 <Dyrcona> A Fighter/Wizard with a Pixie familiar and a good summons, and you can be a 1-person RAID. :)
16:12 <Dyrcona> So, on a more on-topic note: we've had a couple of episodes today where Nagios flagged "NOT CONNECTED TO THE NETWORK" as critical.
16:13 <Dyrcona> Most of these look like "clients" going away, possibly someone closing a tab: opensrfbh2.public/_bh2_1626965772.254689_11025 IS NOT CONNECTED TO THE NETWORK!!!
16:14 <Dyrcona> But some are cstore drones: opensrfbh5.private/open-ils.cstore_drone_bd2-bh5_1626963749.886461_29073 IS NOT CONNECTED TO THE NETWORK!!!
16:15 <Dyrcona> Something like 1 in 5 or 1 in 7 are cstore drones, when there are any drones. Sometimes it's only clients in a given hour.
16:18 <Dyrcona> It has only been cstore drones today.
16:18 <Dyrcona> No other drones showing up as not connected.
16:18 <Dyrcona> Think this is bug-worthy?
16:19 <Dyrcona> Also, any suggestions of anything to look for in the logs?
16:26 <Keith_isl> Dyrcona ++
16:27 <Dyrcona> I just ran totals for the day: 98 not connected in total, and only 8 for drones. So that's more like 1 in 12 is a drone.
16:35 *** jvwoolf1 joined #evergreen
18:00 <pinesol> News from qatests: Testing Success <http://testing.evergreen-ils.org/~live>
18:05 *** jvwoolf1 left #evergreen