01:14  remingtron_ joined #evergreen
01:14  dbwells_ joined #evergreen
05:12  <pinesol_green> Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html
06:41  berickm joined #evergreen
06:42  Callender joined #evergreen
07:47  rjackson-isl joined #evergreen
08:03  jboyer-home joined #evergreen
08:17  akilsdonk joined #evergreen
08:21  krvmga joined #evergreen
08:51  kbeswick joined #evergreen
08:54  Shae joined #evergreen
08:58  ericar joined #evergreen
09:20  * csharp yawns
09:21  <jeff> morning.
09:24  * berick just found a bug in a line of code from 2006-10-02
09:24  <csharp> good morning ;-)
09:24  <berick> my own line of code, no less
09:24  <csharp> heh
09:24  <tsbere> berick: Well, I imagine you would be the best one to notice it, right? ;)
09:24  <csharp> so *that's* why my feature never worked!
09:25  <berick> tsbere: indeed
09:25  <berick> amuses me i've never encountered it before
09:26  <krvmga> csharp++
09:37  collum joined #evergreen
09:50  mllewellyn joined #evergreen
09:56  <csharp> @seen RoganH
09:56  <pinesol_green> csharp: RoganH was last seen in #evergreen 6 days, 20 hours, 44 minutes, and 42 seconds ago: <RoganH> I would say that the NSA has better things to do than monitor this IRC but they funded staff members to play WoW guilds to catch terrorists in Warcraft, so you never know.
10:00  <krvmga> <confession> i just stopped playing WoW a month ago. </confession>
10:00  <tsbere> I can't say I stopped playing WoW. Mainly due to never having started.
10:01  <krvmga> tsbere: you pre-emptively stopped.
10:03  <krvmga> i got hooked on EQ in 1999 when i worked in the advanced systems group at data general. everyone there was playing.
10:05  <krvmga> dropped EQ and went to WoW when i saw the south park WoW episode.
10:06  <krvmga> right now, guild wars 2 is my transitional get-out-of-mmorpg tool
10:06  <krvmga> and, no, i don't still live in my parents' basement
10:07  <csharp> krvmga++
10:07  * csharp has a very limited number of video games he plays and doesn't play them that much
10:08  <krvmga> tsbere: i checked the array in extras.tt2 and it's the same on brickhead 4 as on all the other brickheads. i still don't have any explanation for why the display is different.
10:08  <tsbere> krvmga: Look for other differing files, then?
10:08  <tsbere> krvmga: Perhaps checksum the entire templates dir on one of the others, then check the list on 4, so that you know what files are different?
10:09  <krvmga> tsbere: all the files in that directory are byte-for-byte the same as on the other brickheads. could there be a caching problem?
10:09  <krvmga> tsbere: i'll do that.
10:09  <krvmga> tsbere++
10:09  <tsbere> krvmga: I would check config files too, for that matter
10:09  <tsbere> krvmga: Bad apache or opensrf configs compared to the other bricks might cause issues
10:10  <krvmga> tsbere: interesting thought. i'll pursue that, too.
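tsbere's checksum idea can be sketched in shell. On a real system the directory would be something like /openils/var/templates on each brick (that path, and the brick names, are assumptions about the install layout); this demo builds throwaway directories so it runs anywhere:

```shell
# Stand-ins for the template trees on two bricks; one file deliberately differs.
mkdir -p /tmp/brick1/templates /tmp/brick4/templates
printf 'same\n' > /tmp/brick1/templates/opac.tt2
printf 'same\n' > /tmp/brick4/templates/opac.tt2
printf 'old\n'  > /tmp/brick1/templates/extras.tt2
printf 'new\n'  > /tmp/brick4/templates/extras.tt2

# Checksum every file under each tree, sorted by path so the lists line up.
(cd /tmp/brick1 && find templates -type f -exec md5sum {} + | sort -k2) > /tmp/sums.brick1
(cd /tmp/brick4 && find templates -type f -exec md5sum {} + | sort -k2) > /tmp/sums.brick4

# Any line in the diff is a file that differs between the two bricks.
diff /tmp/sums.brick1 /tmp/sums.brick4 || true
```

In practice you would generate the sum file on a reference brick, copy it to the suspect brick, and run `md5sum -c` there; the same trick works on the apache and opensrf config directories tsbere mentions next.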
10:16  akilsdonk joined #evergreen
10:29  DPearl joined #evergreen
10:57  yboston joined #evergreen
13:03  hbrennan joined #evergreen
13:04  ldw joined #evergreen
13:08  ldw joined #evergreen
13:50  jboyer-home joined #evergreen
14:21  RoganH joined #evergreen
14:45  <Bmagic> I read somewhere about "deadlocking" the bre table
14:45  <Bmagic> I would like to know about that and what creates that situation
14:47  <Bmagic> I have done a ton of experimenting upgrading our postgres 9.1 DB -> 9.2 and then upgrading the Evergreen database using the DB upgrade scripts up to 2.6.... At this point, I cannot get any search results using the OPAC. I am afraid that perhaps I have run into this deadlock situation. (The reingest is running but not finished)
14:49  <tsbere> Bmagic: The problem you are running into is likely "the reingest results are not yet available to other queries"
14:50  <Bmagic> tsbere: I also had the database in a state right after the upgrade to 2.6 and before doing the reingest - still no search results - is the reingest absolutely required in order to get results?
14:50  <tsbere> Bmagic: Depending on a number of factors the reingest may be required. I have not looked too much at various version upgrade paths on that end, though
14:51  <Bmagic> What can I do to peek at the guts of postgres while it's running a query?
14:52  <tsbere> Bmagic: Well, you can't see too much about the query, but you can look for a deadlock. In that case you want to look at the list of waiting locks and see if two processes are waiting on each other.
14:52  <Bmagic> with another sql query I presume?
14:53  <tsbere> Yea. Though it might be easier to use something like pgadmin that can list the locks for you in a graphical window if you aren't comfortable with raw SQL yourself.
14:54  <Bmagic> I'll see what I can find
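The "list of waiting locks" tsbere mentions lives in the pg_locks view. Below is a simplified, 9.2-era sketch of the usual self-join (pg_blocking_pids() did not exist until 9.6); a production query would also match the remaining lock-target columns (page, tuple, classid, objid, virtualxid), so treat this as a starting point rather than the canonical query:

```shell
# Pair each ungranted lock with a granted lock on the same object held by
# another backend. Written to a file here; run it with psql on a live server.
cat > /tmp/waiting_locks.sql <<'SQL'
SELECT w.pid                  AS waiting_pid,
       h.pid                  AS holding_pid,
       w.locktype,
       w.relation::regclass   AS relation
FROM pg_locks w
JOIN pg_locks h
  ON h.granted
 AND w.locktype      = h.locktype
 AND w.database      IS NOT DISTINCT FROM h.database
 AND w.relation      IS NOT DISTINCT FROM h.relation
 AND w.transactionid IS NOT DISTINCT FROM h.transactionid
 AND w.pid          <> h.pid
WHERE NOT w.granted;
SQL
cat /tmp/waiting_locks.sql
# Against a live server:  psql -d evergreen -f /tmp/waiting_locks.sql
```

If two pids each appear in both columns against the other, that is a mutual wait; note that Postgres's own deadlock detector cancels one side of a true deadlock after deadlock_timeout, so a query that hangs silently is usually just blocked, not deadlocked.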
15:00  <tsbere> Bmagic: I highly suspect you are just dealing with the results of your reingest being in a single transaction, and the results of that transaction won't be visible until it is completed.
15:01  <Bmagic> tsbere: I am doing them one bib at a time
15:01  <Bmagic> tsbere: and again - even before I started the reingest, the opac would return nothing
15:01  <tsbere> Bmagic: One bib at a time != One bib per transaction in this case.
15:02  <tsbere> Bmagic: For example, if before you started reingesting bibs there was a BEGIN statement then you are in a transaction. Otherwise anything that counts as a single statement, including "function call that runs other statements", is also a transaction.
15:07  <Bmagic> tsbere: I understand that, the perl script does the browse ingest one at a time and the search ingest in batches as per Dyrcona's wisdom
15:09  <Bmagic> SELECT metabib.reingest_metabib_field_entries($bibID, TRUE, FALSE, TRUE)
15:11  <Bmagic> that is for the browse and this is for the rest - SELECT metabib.reingest_metabib_field_entries($bibID, FALSE, TRUE, FALSE)
15:13  <Bmagic> tsbere: It looks like the ingest is 1 transaction at a time per bib
15:13  <Bmagic> http://git.mvlcstaff.org/?p=jason/evergreen_utilities.git;a=blob;f=scripts/pingest.pl;h=2b43e9a45b20e10325217f86579002b5c0e14315;hb=cc9a96dc734997946c8027d41c9252a545dc05a8
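tsbere's distinction ("one bib at a time != one bib per transaction") can be made concrete: under psql's default autocommit, each top-level statement is its own transaction, so a file of per-bib calls commits, and becomes searchable, bib by bib. A sketch using the browse-ingest call quoted above; the bib IDs are placeholders:

```shell
# Generate one autocommitted statement per bib (IDs here are made up).
for bib in 101 102 103; do
  echo "SELECT metabib.reingest_metabib_field_entries($bib, TRUE, FALSE, TRUE);"
done > /tmp/reingest_browse.sql
cat /tmp/reingest_browse.sql
# Run with:  psql -d evergreen -f /tmp/reingest_browse.sql
# Wrapping the whole file in BEGIN/COMMIT instead would turn it into one big
# transaction, and nothing would be visible until the final COMMIT.
```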
15:15  <jboyer-home> Bmagic: How many concurrent bibs is your parallel reingest running vs. the number of cores in your db? Depending on how hard it’s working, you may be pushing all of the necessary data out of cache and the db can’t keep up with searching and reingesting without the request timing out. We’ve had that problem during past upgrades.
15:15  <jboyer-home> If you can eventually get results by performing the same search repeatedly that’s likely what’s going on.
15:16  <Bmagic> jboyer-home: If we take the reingest out of the equation - the opac still does not return results. Which is what prompted me to do the ingest. I was thinking that perhaps I needed to reingest in order for the search results to start working. Is that right?
15:18  <jboyer-home> Depends on how much of an upgrade you did and how much the search has changed. It’s possible that pre-reingest things can’t work, or it could also be that a cold start on your db takes a while to get running. (I think I have a 2.6 DB sitting around that hasn’t been reingested, but I don’t have any opensrf services pointing at it yet to test)
15:24  gsams joined #evergreen
15:26  vlewis joined #evergreen
15:32  <Bmagic> jboyer-home: my current theory is the 9.1 -> 9.2 is the culprit - I am setting up another VM and this time I will leave it with 9.1 and upgrade to 2.6. Guess and check method
15:33  <jboyer-home> Bmagic: That, or the order of upgrades. I think I’ve heard mention of upgrading Eg first, then Pg, but that may have only mattered for Pg 9.3, I don’t remember off hand.
15:33  <jboyer-home> Good luck in either case, I’m curious how it turns out.
15:35  <Bmagic> jboyer-home: I was thinking that too, I think I did the reverse - 9.1 to 9.2 and then upgrade Evergreen DB.... so this time, I am going to leave it at 9.1 and upgrade the EG DB to 2.6 and see if I can get search results... ARRRG
15:37  <bshum> Bmagic: Do you use any filtering by default for formats, etc. in your searches? I found that for 2.6 until I ran the full reingests to get my formats indexed, none of my bibs would show up if I were to say search for just books, etc.
15:37  <bshum> Bmagic: Also, what version are you upgrading from?
15:38  <Bmagic> bshum: no options, plain search for "harry potter" default all formats. Upgrading from 2.4.1
15:38  <bshum> fwiw, we dumped our DB from PG 9.1 and restored into PG 9.3 before we upgraded our database using scripts. So I wouldn't generally expect it to be trouble.
15:39  <Bmagic> bshum: the only method that I have working is installing 9.2 on the same machine where the 9.1 DB is located, and using the pg_upgrade tool
15:39  <Bmagic> bshum: You recommend a better method?
15:39  <bshum> Well we were moving physical servers.
15:40  <Bmagic> Even moving physical servers, how do you restore 9.1 content into 9.3 db?
15:40  <bshum> So that's also why we pg_dump / pg_restore from old DB to new DB, regardless of versioning. I did that back in the day too for 8.4 to 9.1
15:40  <bshum> The usual, just use pg_dump to create a full copy of the old DB, then pg_restore to put it into a new DB but running with the newer version.
15:41  <jeff> pg_upgrade is the new magic. prior to that, you always used pg_dump / pg_restore to go between n.x and n.y versions. :-)
15:41  <Bmagic> gotcha, for some reason that wasn't going to work for me..... perhaps it will
15:41  <bshum> As long as nobody is using the old DB, then I consider it a "static" enough copy.
15:41  <bshum> jeff: Ah right. New magic... pfft ;)
15:42  <Bmagic> :)
15:44  <Bmagic> pg_dump evergreen > outfile.dump will get all of the DB users as well?
15:45  <bshum> That I'm less certain about. I only have a handful of DB users and I tend to create them manually prior to setting up other parts of my DB. My gut feeling is no.
15:45  <bshum> A quick google search makes me wonder on pg_dumpall
15:46  <Bmagic> yeah, I was reading that the other day
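The dump/restore path bshum and jeff describe, which also answers the DB-users question: pg_dump covers only the one database, while roles (and tablespaces) come along via pg_dumpall's globals. A hedged sketch; the database name and the `-j` job count are assumptions, and DRY_RUN=1 just prints the steps instead of touching a server:

```shell
# Dump from the old cluster, restore into the newer one (e.g. 9.1 -> 9.3).
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run pg_dumpall --globals-only -f globals.sql   # roles + tablespaces, which pg_dump skips
run pg_dump -Fc -f evergreen.dump evergreen    # custom format: lets pg_restore run parallel jobs

# ...then, against the new cluster:
run psql -d postgres -f globals.sql            # recreate roles before restoring
run createdb evergreen
run pg_restore -j 4 -d evergreen evergreen.dump
```

The custom (`-Fc`) format is what makes `pg_restore -j` possible; a plain-SQL dump would have to be replayed serially through psql.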
15:46  <bshum> Bmagic: Have you looked at the logs to see if there's any sign of what's causing the no results? Is it a timeout or locks on the DB as you seem to suspect.
15:47  <Bmagic> bshum: I have assumed this: cstore asks the db for the results, cstore gives up, then returns apache no results, then the web browser draws the page with all of our templates and "no results" in the contents
15:49  <bshum> Bmagic: That's some assuming :)
15:49  <bshum> I'm curious, when you installed PG 9.2, did you also tune the config file?
15:49  <Bmagic> I did tune the config - good question
15:49  <bshum> I wonder if maybe it's a performance problem with the multiple versions installed.
15:50  <bshum> Hmm
15:59  <Bmagic> bshum: I eliminated the old PG after the upgrade process
16:00  <Bmagic> bshum: that could be part of the issue as well, because there are a ton of dependencies when removing 9.1.... and perhaps some needed by 9.2, so that is a little shaky too
16:00  <jboyer-home> 2 minor data points: I finally got a 2.6 instance built and pointed at my 2.6 db which I’m certain hasn’t been reingested. It timed out 2-3 times (I assume no one has test db servers as nice as their prod machines)
16:01  <jboyer-home> 2nd data point: I’m going to squee like a schoolgirl about Ansible at the next Eg Intl conference. (Hint: a couple of hours ago there was no machine to even run 2.6 on)
16:02  <jboyer-home> Re: searches: but it did eventually return results. (Had a concurrency error in my thought processes there)
16:02  <Bmagic> jboyer-home: Are you saying that after the 3rd try, you got results? and yes, ansible is awesome
16:02  <jboyer-home> 3rd or 4th, yeah.
16:03  <Bmagic> jboyer-home: See, I never got results, even after 15 tries... and waiting for the DB to stop processing... 20 minutes goes by, search again and nothing
16:03  <bshum> That sounds more like a query that takes too long on a cold DB. And by the 3rd or 4th attempt, it has finished or cached.
16:03  <Bmagic> bshum: exactly
16:04  <jboyer-home> That’s what I was experiencing, yes, I just didn’t know if Bmagic was having the same issue. Sounds like something is straight up busted though. :(
16:04  <Bmagic> bshum: so I would expect it to finish with my original search after an hour (good god I hope not) but that is how much time I gave it, and still no results. Which leads me to the conclusion: The DB was unable to come up with the results
16:04  <jboyer-home> I didn’t know you had waited that long/tried that many times.
16:05  <bshum> Bmagic: Did you use something like SELECT * FROM pg_stat_activity; to see what else is happening on the DB?
16:05  <bshum> Just to see if the query was still going when you started the search
16:05  <Bmagic> bshum: I am now... some interesting things going on in there, but they finish after awhile... then I search the same search... no results
16:06  <Bmagic> bshum: I do see my query getting to the DB, so I know for sure that my search is getting passed to my DB
16:19  dconnor joined #evergreen
16:22  <Bmagic> jboyer-home: bshum: tsbere: I am almost done upgrading this db to 2.6 from 2.4.1 - and then the search test again..... drumroll
16:24  <jboyer-home> I am sick with jealousy. It takes close to overnight to go from 2.5.2 to 2.6.1 for us. (on an old antique server, but still)
16:30  artunit joined #evergreen
16:34  <Bmagic> jboyer-home: what is the result of du -sh /var/lib/postgresql/9.x/main for you? Ours is 61GB
16:35  <jboyer-home> 270 GB last I looked
16:35  <Bmagic> Dang
16:36  <Bmagic> That is some data
16:36  <Bmagic> Moving that around has got to be a pain
16:36  <tsbere> We are around 161GB as of Wednesday, dunno if it has gone up/down since then
16:36  <Bmagic> 1GB nics just don't cut it
16:37  <tsbere> Our dump files, however, are only in the 8GB range.
16:39  <jboyer-home> Our dumps take roughly 2-3 hours and are only 20-21GB (I don’t dump the auditor schema, we have 3 copies of the db on live hardware, the dumps are just for end of the world OH NOES and testing)
16:39  <jboyer-home> It’s the test restoring that’s really rough. servers with 128GB of RAM take multiple days. :(
16:43  <Bmagic> jboyer-home: I feel some of that pain over here. We are in a memory drought ourselves
16:44  <jboyer-home> The real deals have 256GB, so it’s not as bad as it could be.
16:45  <Bmagic> awwwwww, 30 minutes into 2.5.3 - 2.6.0 and this "Can't locate XML/LibXSLT.pm in" gotta install that and START AT THE BEGINNING
16:45  <jboyer-home> Ouch.
16:46  <Bmagic> But if that happened after 2 days, I would be really upset
16:48  vlewis_ joined #evergreen
16:48  <jboyer-home> I was curious and just checked, apparently I started a restore on our migration testing server last week. I know this because the COPY for asset.copy is still running with an xact_start of 6/13. D:
16:51  ldw_ joined #evergreen
16:54  <jboyer-home> Good luck, Bmagic!
16:55  <pinesol_green> Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html
18:10  dbwells joined #evergreen
18:28  mtcarlson_away joined #evergreen
18:28  b_bonner joined #evergreen