Time | Nick | Message
06:01 <pinesol> News from qatests: Testing Success <http://testing.evergreen-ils.org/~live>
07:37 -- collum joined #evergreen
08:22 -- rfrasur joined #evergreen
08:45 -- mmorgan joined #evergreen
09:28 -- mmorgan joined #evergreen
09:28 -- rfrasur joined #evergreen
09:28 -- collum joined #evergreen
09:28 -- tlittle joined #evergreen
09:28 -- Keith-isl joined #evergreen
09:28 -- alynn26 joined #evergreen
09:28 -- denials joined #evergreen
09:28 -- jonadab joined #evergreen
09:28 -- bshum joined #evergreen
09:28 -- dluch joined #evergreen
09:28 -- devted joined #evergreen
09:28 -- Bmagic joined #evergreen
09:28 -- book` joined #evergreen
09:28 -- ejk joined #evergreen
09:28 -- troy joined #evergreen
09:28 -- Christineb joined #evergreen
09:28 -- jeffdavis joined #evergreen
09:28 -- degraafk joined #evergreen
09:28 -- akilsdonk_ joined #evergreen
09:28 -- rhamby_ joined #evergreen
09:28 -- phasefx_ joined #evergreen
09:28 -- berick joined #evergreen
09:28 -- jweston joined #evergreen
09:28 -- gmcharlt joined #evergreen
09:28 -- pinesol joined #evergreen
09:28 -- csharp_ joined #evergreen
09:28 -- pastebot joined #evergreen
09:28 -- JBoyer joined #evergreen
09:28 -- miker joined #evergreen
09:28 -- eady joined #evergreen
09:28 -- eby joined #evergreen
10:06 -- mantis joined #evergreen
10:15 <Bmagic> With newer versions of Evergreen, I notice more hardware demand. I also find that to keep a healthy application server, we need to restart services often, mostly to reclaim memory. 4CPU/16GB bricks seem to need it daily. Apache seems to be the largest offender. Am I doing something wrong?
10:17 <Bmagic> I guess a better question is: are any of you finding yourselves doing similar maintenance to the production bricks? Daily/weekly/etc?
10:25 -- jweston30 joined #evergreen
10:28 <Bmagic> berick: some research led me to https://git.evergreen-ils.org/?p=OpenSRF.git;a=commitdiff;h=304365165e7ba0cc08bb6c5f0ba25f0b541fd27d. I see that patch is in OpenSRF. I'm not sure, but does the max_requests setting light up this code once the threshold is crossed, causing the drone to finally free its resources?
10:28 <pinesol> Bmagic: [opensrf|Bill Erickson] LP#1706147 Perl Force-Recycle drone option - <http://git.evergreen-ils.org/?p=OpenSRF.git;a=commit;h=3043651>
10:29 <berick> Bmagic: drones will self-destruct once they reach max requests, regardless of this patch.
10:29 <berick> this patch lets the API code force a self-destruct before max_requests is reached
10:30 <Bmagic> I see, so maybe* my issue is the max_requests value being too high for some services?
10:30 <berick> useful for resource-heavy API calls (e.g. fine generator, hold targeter, etc.)
10:30 <berick> Bmagic: it's possible, but if Apache is your main issue, you may want to look at Apache max_requests first
10:31 <Bmagic> MaxRequestsPerChild 10000
10:32 <berick> i'm at 5000
10:32 <berick> fwiw
10:32 <Bmagic> does apache do the same thing? Where an apache process dies after <MaxRequestsPerChild> requests?
10:32 <berick> yep
10:33 <Bmagic> that sounds like something to mess with for sure. Though, I am noticing that almost all* of my max_requests in opensrf.xml are 1000
10:33 <berick> well, hm, i'm looking at a different setting..
10:33 * berick does a quick google
10:34 <berick> ah ok, MaxRequestsPerChild is the old name for MaxConnectionsPerChild
10:35 <berick> looks like both are supported, though
10:35 <Bmagic> looking at: mpm_prefork.conf btw
10:35 <berick> but yes, lowering that value will allow apache to reclaim resources more frequently
10:35 <berick> yeah
10:35 <Bmagic> 5000 < 10000, so, I guess tweak until it works better, ha!
10:36 <berick> it's a good place to start
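[Editor's note: the child-recycling setting discussed above lives in Apache's prefork MPM configuration. A minimal sketch follows; the numbers are illustrative, not recommendations, and other directives are site-specific.]

```apache
# e.g. /etc/apache2/mods-available/mpm_prefork.conf (Debian/Ubuntu layout)
<IfModule mpm_prefork_module>
    StartServers             5
    MinSpareServers          5
    MaxSpareServers         10
    MaxRequestWorkers      150
    # Recycle each child process after this many connections so the OS
    # can reclaim any memory the child has accumulated.
    # MaxRequestsPerChild is the legacy name for the same directive.
    MaxConnectionsPerChild 5000
</IfModule>
```

As noted in the discussion, setting this too low just trades memory for fork/cleanup CPU overhead, so it is a value to lower gradually while watching load.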
10:36 <Bmagic> can it be a bad thing? Like, if there aren't enough requests left in the life of a PID, and a large multi-part Evergreen request chain comes in, will the interface suffer somehow?
10:37 <berick> it's only a problem if the value is so low apache has to spend a noticeable amount of time/cpu cleaning and forking processes
10:38 <Bmagic> after putting nginx in front, the number of simultaneous apache threads went down dramatically. Like from 200 down to 16
10:39 <berick> depends on load, but I'm guessing you'd have to be <1000 to get into that situation
10:39 <berick> yeah, nginx has been a great addition
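[Editor's note: the nginx-in-front arrangement mentioned here terminates client connections in nginx and proxies to Apache on an internal port. A generic sketch follows; the port numbers, certificate paths, and upstream name are assumptions for illustration, not Evergreen defaults.]

```nginx
# Hypothetical sketch: nginx as the public-facing proxy for Apache.
upstream apache_backend {
    server 127.0.0.1:7443;   # Apache listening on an internal port (assumed)
    keepalive 32;            # reuse backend connections, so far fewer
                             # Apache children are needed concurrently
}

server {
    listen 443 ssl;
    ssl_certificate     /path/to/fullchain.pem;
    ssl_certificate_key /path/to/privkey.pem;

    location / {
        proxy_pass https://apache_backend;
        proxy_http_version 1.1;          # required for backend keepalive
        proxy_set_header Connection "";  # strip "close" so connections persist
    }
}
```

Backend keepalive is one plausible reason the simultaneous Apache process count can drop sharply (e.g. the 200-to-16 change described above): slow client connections are absorbed by nginx rather than each occupying an Apache child.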
10:41 <Bmagic> but that means that those 16 threads are eating up all the memory (for me). 10000 is too high. Also, the drones play into this a bit? Because apache is brokering the connection to the drones, basically? And if the drones are allowed to live for 1000 requests..... see my logic?
10:41 <berick> the drones will recycle themselves regardless of any apache settings
10:41 <berick> based on their own settings
10:42 <Bmagic> understood. I am just thinking that 1000 is too high for most of them
10:43 <Bmagic> like with pcrud, perhaps having those live for so long isn't as good as recycling them more frequently? Whereas the seldom-used drones might be better off with a higher setting?
10:44 <berick> well, the C apps (pcrud, cstore, etc.) use way less RAM than the Perl apps, so recycling them frequently is less critical
10:44 <berick> i have both at 2k
10:45 <berick> the only one I have below 1k is Vandelay (Perl, heavy lifting, etc.)
10:46 <berick> 1k requests happens pretty quickly
10:46 <Bmagic> good info!
10:47 <Bmagic> I get a little confused in some of these XML blocks where max_children is mentioned twice. One for the bare service and one for the unix_config clause
10:47 <Bmagic> sorry, max_requests
10:48 <berick> the one under <unix_config/> is the one you want. IIRC, the other one has to do with max_requests per connected session (which is only supported in Perl)
10:49 * Bmagic ignores non-unix_config max_requests setting
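[Editor's note: the two max_requests settings being distinguished above appear in an opensrf.xml app block roughly as sketched below. The values are illustrative and unrelated elements are omitted; check opensrf.xml.example in your OpenSRF install for the full block.]

```xml
<!-- Sketch of one service's app block in opensrf.xml (trimmed). -->
<open-ils.circ>
    <language>perl</language>
    <!-- Bare setting: requests per connected session (Perl-only);
         usually left alone. -->
    <max_requests>97</max_requests>
    <unix_config>
        <!-- Per-drone lifetime: the drone exits and is respawned after
             this many requests, reclaiming its memory. This is the
             setting discussed above. -->
        <max_requests>1000</max_requests>
        <min_children>1</min_children>
        <max_children>15</max_children>
    </unix_config>
</open-ils.circ>
```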
10:50 <Bmagic> berick++ # real good stuff here
10:54 -- Christineb joined #evergreen
10:54 <Bmagic> https://docs.evergreen-ils.org/eg/docs/latest/development/intro_opensrf.html#serviceRegistration
10:54 <Bmagic> There is some* documentation on the matter but nothing that spells out exactly what was discussed here
11:31 <jeff> MARC records with tags 01I, 26I, 6S5... if I didn't know better, I'd wonder if there was OCR involved in a migration somewhere. :-)
11:45 <rhamby_> jeff: I have wondered that myself on occasion, though my favorite was subfield *
12:27 -- collum joined #evergreen
12:31 -- collum joined #evergreen
14:20 <csharp_> so... perl question that I'm having a hard time answering - trying to pass a hash reference to a subroutine intact, where I can call "sub($student)" and receive that hash reference in @_
14:20 <csharp_> my ($student, $district_code) = @_;
14:20 <csharp_> ^^ the first line of the sub
14:21 <csharp_> but when I try to access the hash elements by key, perl complains that I haven't declared %student
14:25 <jeffdavis> My first guess would be that you're doing $student{barcode} rather than $student->{barcode}
15:28 <jeffdavis> csharp_: I've shared a potential fix for bug 1932051 (building on your branch)
15:28 <pinesol> Launchpad bug 1932051 in Evergreen "AngularJS Add to Item Bucket Generating too many simultaneous requests" [High,Confirmed] https://launchpad.net/bugs/1932051
15:31 <csharp_> jeffdavis: ooh - nice - I'll take a look
15:36 <csharp_> jeffdavis++ # your -> suggestion was my issue - thanks!
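[Editor's note: a minimal self-contained sketch of the hash-reference issue resolved above. The `barcode` key and sample values are hypothetical, invented for illustration.]

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# The sub receives a hash *reference* in @_, so elements must be
# accessed through the reference with the arrow operator.
sub print_student {
    my ($student, $district_code) = @_;

    # Wrong: $student{barcode} reads from an undeclared hash %student,
    # which "use strict" rejects ("Global symbol %student requires
    # explicit package name").
    # Right: dereference the reference held in the scalar $student.
    print "$student->{barcode} ($district_code)\n";
}

my $student = { barcode => '123456789', name => 'Example Student' };
print_student($student, 'DIST1');   # prints "123456789 (DIST1)"
```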
15:36 <jeffdavis> :)
16:19 -- Keith-isl joined #evergreen
16:19 -- alynn26 joined #evergreen
17:14 -- mmorgan left #evergreen
18:01 <pinesol> News from qatests: Testing Success <http://testing.evergreen-ils.org/~live>