00:35 -- bbeeAF joined #evergreen
00:35 <pinesol> bbeeAF: git diff origin/hamster
01:04 -- hankUJ joined #evergreen
01:22 -- legittd joined #evergreen
01:53 -- Onigirivs joined #evergreen
02:10 -- stmuk_yd joined #evergreen
02:55 -- TippinTaco___ joined #evergreen
02:56 -- nek0rn joined #evergreen
03:57 -- {AS}Ww joined #evergreen
05:27 -- svennMj joined #evergreen
05:27 <pinesol> svennMj: http://cat.evergreen-ils.org.meowbify.com/
06:10 -- lankybC joined #evergreen
07:04 -- agoben joined #evergreen
07:09 -- collum joined #evergreen
07:12 -- rjackson_isl joined #evergreen
07:16 -- sysdeffz joined #evergreen
07:36 -- kmlussier joined #evergreen
07:44 -- rlefaive joined #evergreen
08:05 -- bos20k joined #evergreen
08:10 -- rlefaive joined #evergreen
08:11 -- armenbEK joined #evergreen
08:21 <kmlussier> Good morning #evergreen!
08:21 <kmlussier> @coffee [someone]
08:21 * pinesol brews and pours a cup of Kenya Peaberry Deep River Estate, and sends it sliding down the bar to bdljohn
08:21 <kmlussier> @tea [someone]
08:21 * pinesol brews and pours a pot of Honey Black Tea, and sends it sliding down the bar to abneiman (http://ratetea.com/tea/health-and-tea/honey-black-tea/7529/)
08:59 -- mmorgan joined #evergreen
09:06 -- jvwoolf joined #evergreen
09:12 <abneiman> kmlussier++
09:12 <abneiman> that tea sounds delightful
09:21 -- yboston joined #evergreen
09:24 -- Guest86276 joined #evergreen
09:29 -- Dyrcona joined #evergreen
09:59 <Bmagic> Anyone else seeing the hold targeter eat Postgres? We are seeing super long running queries that look like (to me) the hold targeter running. 55 minute queries. Here is an example https://explain.depesz.com/s/3msA
10:01 <pastebot> "Bmagic" at 64.57.241.14 pasted "Hold Targeter long running" (11 lines) at http://paste.evergreen-ils.org/14179
10:18 <Bmagic> "acp".location not indexed?
10:22 -- plux joined #evergreen
10:23 -- beanjammin joined #evergreen
10:42 <csharp> Seq Scan on record_attr_vector_list mravl looks like the problem to me
10:42 <csharp> but I don't know :-/
10:46 -- bdljohn joined #evergreen
10:46 <Bmagic> yeah, I've been poking at it, and yep, that's the big issue
10:47 <Bmagic> CREATE INDEX mravl_source_idx ON metabib.record_attr_vector_list USING btree (source);
10:47 <Bmagic> made a big difference
10:47 <Bmagic> took it down to 3 minutes
10:48 <Bmagic> here is the explain analyze with the index in place: https://explain.depesz.com/s/fNf6
10:50 <Bmagic> now it looks like it could be improved again with an index on acpm.target_copy maybe?
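[Editor's note: before adding an index like the one above, it is worth checking what already exists on the table; this generic PostgreSQL sketch (not from the log) foreshadows dbwells_'s point below that `source` was already covered by the primary key.]

```sql
-- List existing indexes on the table csharp flagged, using the
-- standard pg_indexes view:
SELECT indexname, indexdef
FROM pg_indexes
WHERE schemaname = 'metabib'
  AND tablename = 'record_attr_vector_list';
```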
10:54 <dbwells_> Bmagic: I'm scratching my head at how you didn't already have an index on metabib.record_attr_vector_list(source). It is the primary key of that table.
10:55 <Bmagic> weird
10:55 <Bmagic> you're right, it's clearly the primary key
10:56 <Bmagic> that means that it should be indexed without explicitly telling it to (just talking out loud)
11:00 -- khuckins joined #evergreen
11:01 <Bmagic> it got better when I added the index though....
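[Editor's note: Bmagic's point holds in general — in PostgreSQL, declaring a PRIMARY KEY automatically creates a unique btree index, so a second index on the same column is redundant. A minimal sketch using a hypothetical `demo` table, not the Evergreen schema:]

```sql
-- PRIMARY KEY implicitly creates a unique index (named demo_pkey here),
-- so the extra CREATE INDEX below adds nothing but maintenance cost:
CREATE TABLE demo (source bigint PRIMARY KEY, vlist int[]);

-- pg_indexes already shows demo_pkey covering (source):
SELECT indexname, indexdef FROM pg_indexes WHERE tablename = 'demo';

-- Redundant: duplicates the primary key index on the same column.
CREATE INDEX demo_source_idx ON demo USING btree (source);
```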
11:08 -- Christineb joined #evergreen
11:47 -- beanjammin joined #evergreen
11:56 -- aabbee joined #evergreen
12:03 -- bos20k joined #evergreen
12:03 -- lucas_ joined #evergreen
12:09 -- jihpringle joined #evergreen
12:15 -- benfRa joined #evergreen
12:36 -- emilbayeskx joined #evergreen
12:39 -- yboston joined #evergreen
12:53 -- beanjammin joined #evergreen
13:07 -- yboston joined #evergreen
13:23 -- khuckins joined #evergreen
14:03 -- jvwoolf joined #evergreen
14:08 <miker> Bmagic: vac-analyze. if the data changes fast enough (reingest, batch update, settings that slow down autovacuum) then autovacuum can't keep up. all comes down to tuning... :)
14:30 -- jvwoolf1 joined #evergreen
14:53 <Bmagic> miker: Are you saying that it needs vac-analyze performed manually at this point?
15:01 <miker> Bmagic: if the planner is generating a terrible plan, and then a new index (which gets accurate stats because of its creation) helps, your stats are out of date. tl;dr: yep, I'd recommend a vac analyze on the table (verbose, if you want to see how much bloat there is(-ish)) and then drop the new index -- it's not needed and just wastes time and space
15:01 <Bmagic> ah! Thanks
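[Editor's note: miker's recommendation, sketched as standard PostgreSQL statements; the table and index names are the ones from the log above.]

```sql
-- Refresh planner statistics on the table and report vacuum detail
-- (VERBOSE gives a rough view of bloat):
VACUUM (ANALYZE, VERBOSE) metabib.record_attr_vector_list;

-- Then drop the redundant index, since (source) is already covered
-- by the primary key:
DROP INDEX metabib.mravl_source_idx;
```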
15:04 -- mmorgan1 joined #evergreen
15:05 -- khuckins joined #evergreen
15:22 -- jvwoolf1 left #evergreen
15:28 -- jvwoolf joined #evergreen
16:12 -- mmorgan joined #evergreen
16:19 -- Dyrcona joined #evergreen
16:22 -- dbwells joined #evergreen
16:50 <mmorgan> Should bug 1743604 have status Fix Released?
16:50 <pinesol> Launchpad bug 1743604 in Evergreen "Hatch can't find Java if it isn't in the system path" [Medium,Fix committed] https://launchpad.net/bugs/1743604
16:53 <Dyrcona> Well, it was never targeted at a milestone, or rather the milestone was removed, and when doing the bug wrangler dance, we typically only look at bugs that are targeted at milestones. Which is a long way of saying, "Yes" and trying to explain why it didn't get that status.
16:55 <Dyrcona> Though if it is in the Hatch repo, the milestones from Evergreen don't matter. So I should probably undo that, since I don't know how Hatch releases work. :P
16:56 * mmorgan isn't sure how hatch releases work either :)
16:56 <Dyrcona> Never mind, berick says Chrome store was updated with the fix, so that makes it "released."
16:56 <Dyrcona> :)
16:57 <mmorgan> Dyrcona++
16:57 <Dyrcona> That's nothing.... :)
17:08 -- woodcruftkA joined #evergreen
17:10 -- mmorgan left #evergreen
17:19 -- remingtron joined #evergreen
17:55 -- faction joined #evergreen
18:35 -- jvwoolf joined #evergreen
18:56 -- derklingng joined #evergreen
19:07 -- jvwoolf left #evergreen
19:13 -- pulecDJ joined #evergreen
20:47 -- emmavp joined #evergreen
23:39 -- ircbrowseJx joined #evergreen