Time | Nick | Message
00:50 | | mceraso joined #evergreen
04:23 | | mceraso_ joined #evergreen
05:04 | pinesol_green | Incoming from qatests: Test Success - http://testing.evergreen-ils.org/~live/test.html
07:14 | | 23LAAVGRD joined #evergreen
07:14 | | timlaptop joined #evergreen
07:33 | | rjackson-isl joined #evergreen
07:39 | | kmlussier joined #evergreen
07:40 | | collum joined #evergreen
08:08 | | akilsdonk joined #evergreen
08:27 | | Dyrcona joined #evergreen
08:27 | | mrpeters joined #evergreen
08:42 | | dkyle left #evergreen
08:53 | | mmorgan joined #evergreen
08:53 | | ericar joined #evergreen
08:59 | | dkyle joined #evergreen
09:17 | | ningalls joined #evergreen
09:32 | | yboston joined #evergreen
10:10 | | wjr joined #evergreen
10:55 | csharp | ok - we noticed, like jeff did that the authority.by_heading and authority.by_heading_and_thesaurus indexes were lost during our pg_restore process...
10:55 | csharp | I now have a ticket that authority validating is slow - that would be the likely cause, right?
10:55 | eeevil | csharp: yessir
10:55 | csharp | eeevil: thanks! just wanted to confirm ;-)
10:56 | eeevil | csharp: if you're still on <=9.2 you can just recreate them (use CONCURRENTLY :) )
10:57 | eeevil | otherwise, it's tricky ... I'm hoping others will look at my fix for 2.6 for that ... bug 1253163 (thanks, dbwells, for your additions on that one so far!)
10:57 | csharp | yeah - we're on 9.1
10:57 | pinesol_green | Launchpad bug 1253163 in Evergreen "authority indexes can fail on Postgres 9.3.0" (affected: 2, heat: 12) [Critical,Confirmed] https://launchpad.net/bugs/1253163
10:58 | csharp | eeevil: does CONCURRENTLY get along with slony (that you know of)?
10:59 | csharp | slony+=
10:59 | csharp | slony+- #rather
10:59 | eeevil | csharp: slony won't care about indexes at all, unless they are unique and break the semantics of existing constraints
10:59 | eeevil | neither of those indexes should be unique, though
10:59 | csharp | eeevil: ah! even better then
11:00 | eeevil | and even then, if you do it on the master first (so the data being replicated is guaranteed to pass the constraints) you can create new unique indexes
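(For reference, recreating the two lost authority indexes might look roughly like the sketch below. The index expressions are assumed from the stock 2.5-era schema, so verify them against your own authority schema before running; also note that CREATE INDEX CONCURRENTLY cannot run inside a transaction block.)

    -- Index expressions assumed from the stock schema; confirm before running.
    -- Run each statement on its own, outside BEGIN/COMMIT.
    CREATE INDEX CONCURRENTLY by_heading
        ON authority.record_entry (authority.simple_normalize_heading(marc))
        WHERE deleted IS FALSE OR deleted = FALSE;

    CREATE INDEX CONCURRENTLY by_heading_and_thesaurus
        ON authority.record_entry (authority.normalize_heading(marc))
        WHERE deleted IS FALSE OR deleted = FALSE;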
11:03 | | kayals joined #evergreen
11:21 | | afterl joined #evergreen
11:54 | | jeffdavis joined #evergreen
12:09 | | jihpringle joined #evergreen
12:15 | | sseng joined #evergreen
12:39 | jeffdavis | what are the search issues with production data that dbwells mentioned in the RC update email?
12:39 | jeffdavis | we've got a 2.6 beta test server with 200K bibs, can do some testing there if it would help
12:40 | dbwells | jeffdavis: part of the reason for delay is that the issue isn't clear yet (to me, at least)
12:40 | bshum | jeffdavis: One question, did you do any reingests on that test system already?
12:41 | bshum | The initial issue I would see is that 1) no format icons would be visible, and 2) using any of the search filters would not find any specific materials
12:41 | bshum | Till something gets reingested somewhere, somehow
12:41 | bshum | And the "somehow" is the part I've been trying to get to the bottom of
12:42 | dbwells | jeffdavis: yes, bshum is the mysterious "reporter", so he knows better than I.
12:47 | jeffdavis | for this test server I didn't run the upgrade scripts on production data, we have a test dataset containing a subset of 200K bibs (and other stuff) from our prod system
12:47 | jeffdavis | process was (1) create clean 2.6 database, (2) insert bibs and other data
12:47 | berick | the upgrade scripts aren't forcing a re-ingest, at least not the last time I checked.
12:48 | jeffdavis | I don't see the issues with format icons etc that bshum mentioned.
12:48 | berick | jeffdavis: yeah, on a new DB, it should not be an issue
12:48 | _bott_ | bshum: I've run into both 1 and 2 of your noted issues on my test install. Unfortunately, after a big set of metabib.reingest_record_attributes(), I still have a few that don't get searched, and I can't explain
12:49 | _bott_ | ...and I agree on the "somehow". The noted function didn't seem to be getting called unless I did so directly.
12:49 | bshum | _bott_: What we found is that some of our bibs had weird encodings on them. Like a DVD that was encoded as a laserdisc in OCLC.
12:50 | _bott_ | yep, found a few of those too. But I have a couple that seem to "fit", but aren't getting the expected vlist entries
12:50 | | Wyuli joined #evergreen
13:01 | eeevil | _bott_: you mean to say that a reingest of a record, with the appropriate "force on same marc" global flag set to true, does not reingest the record attributes?
13:04 | | phasefx joined #evergreen
13:04 | eeevil | for all, there are 3 new stock record attributes as of CRA+MVF+MR_holds: icon_format, search_format, mr_hold_format
13:06 | _bott_ | eeevil: heh, no, I mean that when it's set to false, setting deleted='f' does not reingest the record attributes! ;-)
13:06 | | mcooper joined #evergreen
13:07 | | krvmga joined #evergreen
13:07 | eeevil | they are all related, but different. for bshum's issues, you can ignore mr_hold_format. both icon_format and search_format will require an attribute reingest. the values for both are, by default, based mainly on different combinations of item type and item form, with a smattering of sound and videorecording formats, and other trace elements, which are in turn MVFs in the new world
13:08 | eeevil | so, for complete coverage (assuming the stock config is forgiving enough with its catch-all values to account for some of the more odd cataloging in the wild), complete attribute reindexing is indicated
13:09 | _bott_ | just getting my feet wet in the new world, and trying to play catch-up
13:10 | eeevil | but, if your cataloging is not, um, esoteric ;) you should be able to get by with a post-upgrade targeted attribute reingest of type, form, sound-recording type (which is a new attribute def), search_format and icon_format
13:15 | _bott_ | even on a database far less capable than production, a full attribute reingest only took about 8 hours. It's the few curious anomalies that have me scratching my head.
13:15 | eeevil | however, a hybrid approach is possible, too, to reduce the total off-line time if your site can accept slightly reduced coverage for search_format and icon_format. first, just reingest those two (or three, if you want to include mr_hold_format) immediately, and then do a rolling reingest (whole or attribute-only) live, afterwards
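(A sketch of the targeted, attribute-only reingest described above. The function call mirrors the one quoted later in this log; the attribute list is an assumption based on eeevil's suggestion, so match it against your config.record_attr_definition entries.)

    -- Attribute-only reingest of selected definitions for all live bibs.
    -- Pass NULL::TEXT[] instead of the array to reingest every attribute definition.
    SELECT COUNT(metabib.reingest_record_attributes(
               id,
               '{item_type,item_form,sr_format,search_format,icon_format,mr_hold_format}'::TEXT[],
               marc,
               FALSE
           ))
      FROM biblio.record_entry
     WHERE id > 0 AND NOT deleted;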
13:15 | eeevil | _bott_: are you going to the conf?
13:16 | _bott_ | indeed
13:17 | eeevil | if the stars align, we may be able to find a little time to do a high-bandwidth investigation there. eh?
13:18 | _bott_ | I thought you'd never ask
13:18 | eeevil | ;)
13:18 | _bott_ | but seriously, I've only looked into it briefly, and it could be simple MARC weirdness.
13:19 | eeevil | bshum: if a full attr reingest doesn't help you, do you think you would have a couple representative cases to look at?
13:19 | eeevil | _bott_: I could take a quick look now, if you have links ready-made
13:20 | _bott_ | Just looking to see if I have this host exposed externally...
13:25 | bshum | eeevil: A full reingest with force on same marc flag toggled tends to fix all the samples I've tried so far on my end.
13:25 | bshum | Doing the partial reingest on just those attributes on those bibs that went missing didn't do anything though.
13:26 | bshum | I've been trying to understand where things live
13:26 | bshum | Am I right that things are somehow stored in metabib.record_attr_vector_list
13:26 | bshum | And that vlist there has some group of numbers that means something to someone?
13:29 | _bott_ | http://216.120.186.140/eg/opac/record/46696286 I believe the attribs are correct as: item_type:a item_form:s bib_level:m ...should be an eBook, but ends up as a Book.
13:29 | eeevil | bshum: yes :)
13:30 | eeevil | bshum: so, vlist contains the ids from config.coded_value_map and metabib.uncontrolled_record_attr_value. the latter are negative
13:30 | bshum | Ah, I was wondering what the negative numbers were meant to correspond to
13:31 | _bott_ | deja vu, this is what I discovered on Fri.
13:31 | eeevil | bshum: using the intarray extension, we can use an index to query metabib.record_attr_vector_list.vlist with a specialized search syntax, akin to tsearch
13:31 | eeevil | so we can say, in one go "contains X and Y and (A or B or C), and not Q" or the like
13:32 | eeevil | for both controlled values (coded value maps) or uncontrolled strings (metabib.uncontrolled_record_attr_value)
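(To make the syntax concrete: a hypothetical filter against the vector list might look like the sketch below. The ids are invented, standing in for config.coded_value_map ids (positive) and metabib.uncontrolled_record_attr_value ids (negative), and the source/vlist column names are assumed from the stock schema.)

    -- "contains 609 AND 610 AND (611 OR 612), and NOT 645" -- hypothetical ids
    SELECT source
      FROM metabib.record_attr_vector_list
     WHERE vlist @@ '609 & 610 & (611 | 612) & !645'::query_int;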
13:33 | eeevil | (side note: we used to store title/author sort values in the metabib.record_attr.attrs hstore. now we dedicate a table to sort values)
13:34 | eeevil | that complex querying capability means we can encode a query /within a coded value map/, which is exactly what Composite Attributes do
13:35 | _bott_ | Hmm... difference I'm finding in eBooks that look as expected and the noted issue, an ELvl of M vs. K
13:35 | * _bott_ | puts on my MARC dunce cap
13:35 | eeevil | so when we reingest a record's attributes, first we do the controlled, non-composite ones, then the uncontrolled ones (no ccvm entry), then the composite attrs
13:37 | bshum | I'm looking for another example book that's missing from our searches to go see what the vlist entries are.
13:37 | bshum | And see if it has anything obviously missing
13:37 | eeevil | to do the composite attr tests (which can /also/ be multi-valued), we look at the vlist after the controlled non-composite portion is updated (basically the positive numbers in the array) and see which, if any, of the composite attr defs for each composite attribute matches the record's data
13:39 | bshum | The query that was suggested previously included reingests on sr_format instead of search_format, so I initially thought that's where things went wrong for me, but doing the reingest for search_format didn't make the record appear in my searches either (reset memcache too, just in case it was just remembering bad results).
13:41 | bshum | But now I want to verify that the right coded values are where I expect them to be. Or not.
13:41 | bshum | I do know that a full reingest took the vlist from one list of numbers to a much longer list of numbers.
13:41 | bshum | But I didn't know what the numbers meant yet to compare.
13:41 | bshum | So I'll do a little more prodding now.
13:42 | eeevil | bshum: there are several views to help with that. metabib.record_attr_flat is an important one
13:42 | bshum | eeevil: Ooh, fancy
13:42 | eeevil | that gives you the textual representation similar to metabib.record_attr, but includes all MVF values
13:43 | * csharp | tunes in to this discussion, knowing for sure it will affect him later ;-)
13:43 | eeevil | metabib.record_attr is still there, too, but that is now a view that just gives you the first (database-order) value for any of the MVF attrs
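(A quick debugging query against that view, sketched with a made-up record id; the id/attr/value column names are assumed from the stock 2.6 schema.)

    -- Every attribute value for one bib, including multi-valued (MVF) attributes.
    SELECT attr, value
      FROM metabib.record_attr_flat
     WHERE id = 12345;  -- hypothetical record id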
13:44 | bshum | csharp: Slacker :)
13:45 | * csharp | sips on fruity drink with a little umbrella by the pool
13:46 | bshum | Oh that view is very handy
13:46 | bshum | I like it :)
13:47 | csharp | we need to upgrade our "next" server to 2.6
13:47 | * csharp | adds to position number 12,322 on his to-do
13:47 | dbwells | bshum: you keep saying something about 'search_format', but I don't see that in the list of attr definitions. Where are you seeing that?
13:48 | bshum | Hmm
13:49 | _bott_ | handy indeed! metabib.record_attr_flat tells me that my problem record does not have an item_form. A direction to search!
13:49 | bshum | dbwells: I noticed it in my config.record_attr_definition
13:49 | dbwells | bshum: okay, I see it now in the upgrade files, now to figure out why I don't have it.
13:49 | bshum | And it's referred to by several entries in my config.coded_value_map.ctype I think
13:50 | bshum | Oh okay
13:50 | eeevil | bshum: I sure hope it is! :)
13:50 | bshum | I was starting to worry I had gone on some crazy upgrade path :)
13:51 | csharp | rather than the True Way of packaged releases? ;-)
13:51 | * csharp | ducks
13:51 | _bott_ | what are these packaged releases you speak of?
13:53 | csharp | _bott_: only those of us in the Way are aware of their existence... all else is git commits
13:54 | _bott_ | We've been meaning to speak to csharp: about being "in the way"
13:54 | csharp | _bott_++
13:55 | dbwells | bshum: sorry, found it. Part of berick's "repairs" branch which I hadn't applied to the particular DB I was on.
13:56 | dbwells | bshum: yes, you will definitely need that one :)
13:56 | dbwells | bshum: did you at any point try the non-specific attr-only reingest?
13:59 | bshum | dbwells: Which is that again? is that like select count(metabib.reingest_record_attributes(id,'{icon_format,sr_format,mr_hold_format,search_format}'::TEXT[],marc,false)) from biblio.record_entry where id > 0 and not deleted;
13:59 | bshum | If so, then yes.
14:00 | eeevil | bshum: no, like this:
14:00 | bshum | Oh, something different.
14:00 | eeevil | select count(metabib.reingest_record_attributes(id,NULL::TEXT[],marc,false)) from biblio.record_entry where id > 0 and not deleted;
14:00 | bshum | Hmm
14:00 | eeevil | omitting the specific list of attrs
14:00 | bshum | That does look different. I will see what that does.
14:00 | dbwells | right, without specifying the attrs you want
14:00 | bshum | On one bib first
14:01 | dbwells | Yes, iterative testing with hour+ waits can be a drag :)
14:02 | bshum | Especially when reloading the fresh DB to then re-run all the upgrade scripts for it :)
14:05 | bshum | Well, the vlist entries changed slightly doing that on the sample bib
14:05 | bshum | Resetting the memcache and trying a fresh search next
14:05 | dbwells | bshum: what is the record being tested, at this point?
14:05 | bshum | Just one of the many "the help" books that didn't show up in search
14:06 | bshum | When I search for it in production I can find 19300 hits, but in the test server, I can only find 9640 hits
14:06 | bshum | So I have plenty to choose from
14:06 | bshum | If I go direct to the record ID the bib lives, so there is that
14:07 | dbwells | bshum: with such nice roundish numbers, are you sure that isn't a superpage setting difference?
14:07 | bshum | There might be some.
14:07 | bshum | But the book still isn't showing up in search
14:07 | bshum | I'm going to do a full reingest now on that bib and seeing what the vlist says next
14:07 | bshum | And yes, it is superpage, good call dbwells :D
14:08 | * bshum | curses his test server
14:08 | bshum | Well, it looks exactly the same.
14:08 | bshum | Wth
14:10 | bshum | And yet doing the full reingest action makes the bib show up in searches now.
14:10 | bshum | The vlist entries didn't change though from what it looked like after doing the non-specific reingest
14:10 | bshum | So maybe it's not that preventing the bibs from showing up in search
14:11 | dbwells | If we're dealing with different search internal limits, reingesting could cause it to be included just by "chance".
14:12 | | mmorgan left #evergreen
14:12 | dbwells | The records at the tail will have increasingly similar relevance (in the initial built-in sorting), so you might just be excluding different ones.
14:13 | bshum | I was just going by what should have been the first 10 hits or so, none of which showed up in any of my attempts to search for them
14:13 | bshum | Sorting and not seeing them would just push them off to further pages no?
14:13 | bshum | Or am I misunderstanding what you're suggesting?
14:14 | dbwells | Unfortunately, some of the ranking you see happens late in the process.
14:14 | dbwells | So if a record doesn't make the first cut, it won't be there, even if further adjustments might have put it at #1.
14:14 | bshum | Keyword for "the help stockett" (title + author) and I get 16 hits in production vs. 8 hits in test.
14:16 | dbwells | Can you adjust your internal search limits on the test server, or is that out of the question?
14:17 | dbwells | but yes, that more specific test should be a better one overall for ruling more things out.
14:17 | bshum | I can probably do that. You mean just changing the opensrf.xml settings?
14:17 | dbwells | right
14:19 | bshum | I can make that the same. Just for parity.
14:29 | bshum | Alrighty, tweaked and trying another search test.
14:29 | | Wyuli joined #evergreen
14:30 | bshum | I'm still not seeing half the books or audiobooks that should have retrieved in the test.
14:31 | dbwells | bshum: well, at least the broad search isn't off by 10,000 any more, but yeah, not seeing what we should when adding "stockett"
14:31 | bshum | Given that the vlist entries aren't changing between those reingests, I guess maybe it's not whatever we're doing there, but something else that a full reingest does that would cause things to suddenly become findable through search again.
14:32 | dbwells | bshum: just to triple check that, can you do this: SELECT metabib.reingest_record_attributes(2895210,NULL::TEXT[],marc,false)
14:33 | bshum | Yep.
14:33 | dbwells | ? It's the first hit of the stockett search.
14:33 | dbwells | cool
14:33 | bshum | It should be. In production.
14:33 | dbwells | right
14:34 | dbwells | I am looking directly at spork1 vs acorn now
14:34 | bshum | It doesn't like your command.
14:34 | bshum | "marc"
14:34 | dbwells | bshum: oh, sorry, wrong shortcut.
14:35 | dbwells | select count(metabib.reingest_record_attributes(id,NULL::TEXT[],marc,false)) from biblio.record_entry where id = 2895210;
14:35 | dbwells | I imagine that's what you did before.
14:35 | bshum | Yeah, on other bibs
14:36 | bshum | I haven't picked this one yet :)
14:36 | bshum | Keeping some in reserve for just these moments.
14:36 | bshum | The vlist entries are the same
14:37 | eeevil | bshum: are you actually limiting on a format?
14:37 | eeevil | in the search
14:37 | dbwells | Not seeing it show up yet, so you're right, it's something else. At least I think we've established that with some certainty now.
14:37 | bshum | Nope.
14:37 | bshum | No limits on the search
14:38 | bshum | Though limiting on the format doesn't help either cause the bib just doesn't show up
14:38 | eeevil | bshum: then attrs are not involved. sorry, I assumed you were testing that
14:38 | eeevil | right. your bibs are not properly ingested at this point in some other way
14:38 | dbwells | bshum: Should we just start walking through the various tables and see what's missing?
14:38 | bshum | dbwells: I guess that's what I should start thinking through.
14:39 | eeevil | are there entries for the missing records in metabib.metarecord_source_map?
14:39 | eeevil | if not, run the quick MR mapper SQL
14:40 | bshum | Hmm
14:40 | dbwells | since metarecords are the other big change here, I agree that's a good place to start
14:40 | bshum | So I should expect to see an entry for every bib as metabib.metarecord_source_map.source?
14:40 | bshum | 2895210 isn't in there
14:40 | eeevil | there you go
14:41 | eeevil | please run the quick MR mapper
14:41 | bshum | What's that?
14:41 | * bshum | looks around
14:42 | eeevil | Open-ILS/src/extras/import/quick_metarecord_map.sql
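(To check for the same symptom before running the mapper, a sketch like the following should list live bibs with no metarecord mapping; table and column names are taken from the stock schema, so verify before relying on it.)

    -- Live bibs that have no entry in the metarecord source map.
    SELECT bre.id
      FROM biblio.record_entry bre
      LEFT JOIN metabib.metarecord_source_map mmsm ON (mmsm.source = bre.id)
     WHERE bre.id > 0
       AND NOT bre.deleted
       AND mmsm.id IS NULL;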
14:43 | * bshum | gives that a whirl
14:44 | bshum | So about 31k inserts and 83k inserts. Guess it found quite a bit
14:45 | bshum | Yay, magic
14:45 | bshum | We have some bibs.
14:45 | dbwells | seems to have fixed it
14:45 | dbwells | :)
14:45 | dbwells | eeevil: Do you know of any (hopefully fixed) bug which would have allowed the map to get so out of date?
14:45 | eeevil | bshum: that was probably caused by the "remove deleted MR leads" bit. I'd suggest we simply tack the contents of the quick MR mapper onto the end of that upgrade script
14:46 | bshum | I was just thinking about the deleted MR leaders
14:46 | bshum | +1
14:46 | dbwells | ah
14:47 | eeevil | going forward, that should not happen because deleted leads are handled better ... but something to keep an eye on
14:53 | dbwells | While the fix (running quick MR mapper) is pretty painless, this has me a bit worried, since it seems like f4d5813d7 may not have worked quite right, which might point to a problem with the script itself, or the underlying remap function.
14:53 | pinesol_green | [evergreen|Mike Rylander] LP#1284864: Forcibly update deleted MR masters - <http://git.evergreen-ils.org/?p=Evergreen.git;a=commit;h=f4d5813>
14:54 | dbwells | Or maybe it was just an issue with how the upgrade happened in bshum's particular case.
14:56 | bshum | For my database, I did use the original query I wrote which didn't take into account the library setting as adjusted by eeevil. But maybe there's more to it than that.
14:58 | eeevil | dbwells: looking at it now
14:59 | bshum | I can always run a fresh reload tonight and see what applying everything all over again looks like (now that we have some more fixes along the way too).
14:59 | bshum | Though I'd be curious to see what _bott_ or others
14:59 | bshum | *see
15:02 | * _bott_ | was sidetracked with things-to-do-before-I-go-out-of-town (tm)
15:02 | dbwells | I think one thing I'll add to at least the packaged upgrade script is the "full" metabib.reingest_record_attributes() call. It'll catch all the new defs, and the newly-multied defs, and it probably isn't worth the risk to try and be super picky just to save time.
15:03 | eeevil | dbwells: agreed, and upgraders can adjust to their taste if they so desire
15:03 | _bott_ | dbwells++ that's what I ended up doing, and other than a few issues I've found, I think it worked well
15:09 | _bott_ | I should note. The "few issues" have turned out to be MARC issues, not software issues
15:09 | bshum | I think I can agree with that now that this last hurdle is solved. And I'm +1 to doing more reingesting.
15:14 | bshum | I have to ask folks to resume testing now.
15:17 | eeevil | FWIW, I do see how, previous to 5f03068 and in the face of a disabled ingest.metarecord_mapping.preserve_on_delete flag, bshum's situation could be created and go unnoticed
15:18 | pinesol_green | [evergreen|Mike Rylander] LP#1284864: Don't leak deleted constituent records - <http://git.evergreen-ils.org/?p=Evergreen.git;a=commit;h=5f03068>
15:21 | dbwells | eeevil: Can you elaborate?
15:22 | eeevil | dbwells: trying to boil it all down to a simple statement ;)
15:22 | dbwells | I imagine that might be hard.
15:24 | Dyrcona | "It's complicated." :)
15:25 | eeevil | basically, we were deleting MRs when the lead was deleted. the fkey from source_map to metarecord is ON DELETE CASCADE, so instead of remapping to a still-living lead, we lost them all. but we didn't care because we also stopped requiring mmsm entries for non-MR search
15:26 | eeevil | but then we started caring again with the ingest.metarecord_mapping.preserve_on_delete flag turned on
15:27 | eeevil | and we started caring even more when we brought MR search (piggybacked on MR holds) back to the tpac
15:29 | eeevil | as things stand, we'll only remove MRs when 1) we don't have the retention setting enabled AND the newly-deleted bib was both the lead and the final constituent OR 2) we update a bib and its fingerprint changes and it is the last constituent
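(The retention setting referred to here is the ingest.metarecord_mapping.preserve_on_delete global flag. A sketch for inspecting and enabling it, assuming the stock config.global_flag columns:)

    -- Check the current state of the flag.
    SELECT name, enabled
      FROM config.global_flag
     WHERE name = 'ingest.metarecord_mapping.preserve_on_delete';

    -- Enable it so metarecord mappings survive bib deletion.
    UPDATE config.global_flag
       SET enabled = TRUE
     WHERE name = 'ingest.metarecord_mapping.preserve_on_delete';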
15:30 | dbwells | So is it true that we now care about mmsm entries, even if we don't check the "group" box, where we didn't care before?
15:30 | eeevil | and the quick MR mapper is meant to clean up exactly this sort of situation
15:30 | eeevil | dbwells: yes. I'll get the code that's causing that caring in QP ... sec
15:32 | eeevil | dbwells: actually, it's the primary table on the core query FROM list. line 855 of Open-ILS/src/perlmods/lib/OpenILS/Application/Storage/Driver/Pg/QueryParser.pm
15:34 | eeevil | hrm ... well, that does not jibe with my tale above, though... unless $flat_plan had a right join.
15:34 | dbwells | right, that line hasn't changed in quite some time.
15:34 | eeevil | ever, to be exact ;)
15:38 | dbwells | :)
15:38 | dbwells | It might be time to just let this shake out a bit more. Whatever the circumstances that caused it, they certainly might well not exist any more.
15:39 | eeevil | the cause must be local to bshum's test server, then. either some intermediate state, or the forced anti-leakage run (though I can't see how, personally) ... right, what you typed faster ;)
15:42 | dbwells | bshum: If you are up for any more inquiry, are you missing metarecord_source_map entries in production? On the other hand, maybe we can look at it together later this week.
15:45 | bshum | dbwells: I can take a closer look again in a bit. Or yeah, there's always some time soon enough.
15:45 | bshum | It's certainly possible it's some weird intermediate step on my end. I really want to blow the whole thing away and try again now :)
16:21 | | bkuhn joined #evergreen
16:23 | bkuhn | Does Steve Wills ever hang out on this channel?
16:34 | | _bott_ left #evergreen
16:52 | | ericar joined #evergreen
16:55 | csharp | bkuhn: he does occasionally
16:56 | bkuhn | csharp: thanks, if he comes by, tell him to ping me if you don't mind. Thanks.
16:57 | | afterl left #evergreen
17:03 | | kayals joined #evergreen
17:03 | remingtron | bshum: Will there be any attempts at live feeds of conference talks?
17:06 | | ericar joined #evergreen
17:24 | | kmlussier joined #evergreen
17:34 | bshum | remingtron: Hi there, I don't think there'll be any official recording going on or streaming. At least not by conference staff.
17:34 | bshum | Unfortunately, things didn't come together for anything like that to happen.
17:35 | bshum | I don't know if there's unofficial stuff going on though.
17:35 | bshum | :)
17:50 | | Dyrcona joined #evergreen
18:10 | phasefx | hrmm, we don't have an obvious link to Launchpad from the website do we? ah, found it, Community Links at the bottom. and on the downloads page
18:13 | bshum | I think in the old days, it used to be in one of the sidebars too. For the PHP site, pre-wordpress.
18:14 | bshum | If you have suggestions on making that more visible somehow, I'd be happy to help point you in the direction of the wordpress login page, phasefx :)
18:23 | phasefx | mulling it over :)
18:45 | phasefx | bshum: do we have a dev version of the site where we can test ideas, etc?
18:55 | kmlussier | phasefx: I don't think we do. We like to live on the wild side.
18:55 | phasefx | roger that :)
19:39 | bshum | phasefx: Yeah kmlussier is right about the wild side. Long term I'd like to see us look at our hosting infrastructure to see what might be possible. I think poor lupin could use some retasking of certain functions.
19:41 | phasefx | wiki is about as wild side as I get for public websites :)
19:43 | phasefx | do think I might be itching to revisit the FAQ's
19:45 | phasefx | don't think I ever heard anyone ask about the "Evergreen Effect" before :-)
19:45 | bshum | They are in fact quite old.
19:46 | bshum | I can't remember when those got relinked back to the wiki...
19:46 | bshum | Someone else must have done that.
19:46 | bshum | Or I'm just slowly losing it.
19:46 | bshum | What day is today again?
19:48 | phasefx | some-day
20:43 | | book` joined #evergreen
20:49 | | bbqben joined #evergreen
20:52 | jeff | good evening.
20:53 | bbqben | indeed - a very good evening
20:55 | kmlussier | bbqben: How's Cambridge?
20:57 | bbqben | in canada cbc means something very different! cambridge brewing co :)
20:59 | bbqben | kmlussier - porters's especially good + empty seats. send help
21:00 | kmlussier | jeff: Are you there too?
21:08 | bbqben | kmlussier: i do not see jeff
21:13 | phasefx | hrm, I get an error with 0864.MVF_CRA-upgrade.sql with DROP VIEW metabib.rec_descriptor;
21:13 | phasefx | ERROR: cannot drop view metabib.rec_descriptor because other objects depend on it DETAIL: view reporter.classic_current_circ depends on view metabib.rec_descriptor
21:13 | | wjr joined #evergreen
21:14 | * phasefx | tries killing that
21:15 | phasefx | yeah, just that view was my stumbling block
21:16 | jeff | kmlussier: indeed i am in Cambridge. Flew in this evening.
21:17 | phasefx | maybe I needed to have followed the advice in 0855 about re-running example.reporter-extension.sql
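(For anyone hitting the same dependency error: one possible workaround sketch, not necessarily the fix that was committed, is to drop the dependent reporter view before the upgrade script's DROP VIEW runs and rebuild it afterwards by re-running example.reporter-extension.sql, as the 0855 notes advise.)

    -- Clears the dependency so 0864.MVF_CRA-upgrade.sql can drop
    -- metabib.rec_descriptor; recreate the view from
    -- example.reporter-extension.sql after the upgrade completes.
    DROP VIEW IF EXISTS reporter.classic_current_circ;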
21:17 | bshum | phasefx: I just committed a fix for 0864 last night.
21:18 | phasefx | bshum: roger doger
21:19 | phasefx | now for smoke testing :)
21:20 | | bbqben joined #evergreen
21:20 | * phasefx | chokes on smoke; probably something else wrong he did
21:23 | phasefx | yay, all working
21:27 | bshum | Whee, cool.
21:37 | eeevil | phasefx: remember to reinjest :)
21:37 | phasefx | is that needed when going from 2.5.3 to 2.6-beta1?
21:38 | * phasefx | will err on the side of caution
21:56 | | geoffsams joined #evergreen
22:03 | | remingtron_ joined #evergreen
22:12 | | mceraso joined #evergreen
22:17 | jeff | this $18/day hotel wifi is pretty terrible
22:18 | kmlussier | jeff: You shouldn't be paying for wifi.
22:18 | jeff | tip: when someone describes their bandwidth using "oceans" as units instead of megabits, worry.
22:19 | kmlussier | Oceans?
22:20 | jeff | kmlussier: good to know. maybe it won't be there on my bill. the front desk assured me that guest room internet access was not included in the conference room rate, and that the conference had no promo code or other discount
22:21 | kmlussier | OK, good to know. I'll send an e-mail before I head to bed.
22:22 | jeff | kmlussier: the captive portal page describing the internet service used the words "oceans of bandwidth" :-)
22:23 | kmlussier | LOL
22:24 | kmlussier | Well, for the amount of money we're paying, the conference wireless should be "oceans of bandwidth." But I'm afraid it won't be.
22:24 | jeff | pizza and salad place around the corner had decent grub and a tasty CBC lager
22:24 | jeff | other than that, i have not seen much of cambridge yet
22:26 | kmlussier | jeff: I have a passcode for the conference wireless. I don't know if it's the same passcode used in the hotel rooms. It's evergreen.
22:28 | jeff | thanks! i believe they are two different things, if what the front desk told me was accurate
22:30 | kmlussier | OK, well I've sent an e-mail to check because we definitely have complimentary "high-speed" Internet in the guest rooms.
22:34 | jeff | i'm sure it will get straightened out. :-)
22:34 | jeff | thanks!
22:37 | jeff | "see, here it is on page 7 of the contract: 'two and one half oceans of Internet to be included in rooms booked under the conference code'"
22:39 | eeevil | they're using neil gaiman's definition of ocean
22:40 | * jeff | borrows a bucket from lettie hempstock's grandmother and brings his in-room ocean of internet down to the conference space
22:45 | eeevil | :)