15:33 |
Bmagic |
sandbergja++ sleary++ |
15:33 |
sleary |
sandbergja++ # thanks for staying on top of linting! |
15:33 |
Bmagic |
thanks again sandbergja! next.... |
15:33 |
Bmagic |
#topic New Business - Test failures, including at least one critical regression (bug 2043437) - Jane |
15:33 |
pinesol |
Launchpad bug 2043437 in Evergreen "Three test failures on rel_3_12 and main" [Critical,New] https://launchpad.net/bugs/2043437 |
15:34 |
Bmagic |
#link https://bugs.launchpad.net/evergreen/+bug/2043437 |
15:34 |
sandbergja |
oh god, I have a bunch in a row :-D |
15:34 |
sandbergja |
we have 3 failing tests, one of which points to a major problem |
15:34 |
sandbergja |
tests++ # catching that before we released it to users! |
15:35 |
sandbergja |
specifically, the holdings view doesn't load (maybe just a missing import or something) |
15:35 |
terranm |
sandbergja++ tests++ |
15:35 |
sandbergja |
I feel pretty strongly we should take care of those before building a beta. |
15:35 |
mmorgan |
+1 |
15:35 |
sandbergja |
But I don't know that I'll have much time to look into them |
15:46 |
Bmagic |
let's move this to post-meeting |
15:46 |
Dyrcona |
JBoyer: Gotcha. |
15:46 |
JBoyer |
Bmagic, +1 |
15:47 |
Bmagic |
#topic New Business - How can we get computers running our tests regularly again? - Jane |
15:47 |
eeevil |
I'll also look at the search one, later |
15:47 |
sandbergja |
#info for anybody wanting to run the tests, or try it out: https://wiki.evergreen-ils.org/doku.php?id=dev:contributing:qa#types_of_tests |
15:47 |
Bmagic |
sandbergja++ |
15:47 |
mmorgan |
sandbergja++ |
15:48 |
sandbergja |
I am just feeling fired up about tests, and wanted to see if there's capacity for getting buildbot running them automatically for us, or some new solution |
15:48 |
Bmagic |
sandbergja: where's the buildbot now? (I've never known where that lives and who's in charge of it) |
15:49 |
shulabear |
sandbergja++ |
15:49 |
sandbergja |
no clue. Was it an EOLI server? |
15:52 |
sandbergja |
the ng lint always passes, for reasons mentioned above hahaha |
15:52 |
Bmagic |
haha |
15:52 |
Bmagic |
not sure if we've arrived at anything I can put down as action |
15:53 |
sandbergja |
I can investigate getting more tests into gh actions, if there aren't concerns with tying ourselves more to that platform |
15:53 |
Bmagic |
#action sandbergja will investigate getting more tests into gh actions |
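[Editor's note: for context on the action item above, a minimal GitHub Actions workflow for the eg2 unit tests might look like the sketch below. The paths, Node version, and npm script are assumptions, not the project's actual CI configuration.]

```yaml
# Hypothetical workflow sketch; not Evergreen's real CI config.
name: eg2-unit-tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - name: Install eg2 dependencies
        run: npm ci
        working-directory: Open-ILS/src/eg2
      - name: Run the Angular unit tests once (no watch mode)
        run: npm test -- --watch=false --browsers=ChromeHeadless
        working-directory: Open-ILS/src/eg2
```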
15:53 |
JBoyer |
It doesn't have to be the project's definitive home to provide a useful function, even if temporarily.
15:54 |
Bmagic |
almost out of time |
15:55 |
eeevil |
#info I've requested we keep XMPP as the default OpenSRF transport in EG main for the time being. There's no redis release of OpenSRF yet, so support is only in a side branch, and having redis be the default will make dev (especially backport-focused dev, like bug fixes) more painful because you can't just switch branches and test the code. Also, I'm not convinced that it's deeply understood by more folks than berick, and that puts a lot of pressure |
15:55 |
eeevil |
on him to Fix All The Things if Things need Fixing. I'm asking here for any strong objections to applying the 2 existing commits that will make that so, as it is now for rel_3_12. Barring any, I'll pick those commits into main and life will be a little simpler for all the not-testing-redis cases, for now. |
15:56 |
eeevil |
(separately, I think I know where the search test failure is coming from, and I'll poke at it early tomorrow) |
15:57 |
Bmagic |
eeevil: and when we've all installed and tested redis, then make it default? |
15:58 |
eeevil |
Bmagic: well, and when more-than-Bill can help work on it, ideally, but yes. "when it and we are ready" |
15:58 |
eeevil |
it's not something we should force Right Now, IMNSHO. but, hopefully, soon |
15:58 |
eeevil |
for a definition of soon somewhere between "months" and "geologic time" |
10:32 |
terranm |
redavis++ cool beans, thx |
10:41 |
redavis |
mmorgan, looking at https://bugs.launchpad.net/evergreen/+bug/1956241. In the patch, it says it maps the missing permission to the concerto circ administrator and circulator permission groups. I'm only seeing it in the circulator permission group. BUT I think the main goal of the patch is to add the permissions. If I'm understanding that correctly, I think it's ready for a sign off.
10:41 |
pinesol |
Launchpad bug 1956241 in Evergreen "Hold Groups: Staff Users Cannot View Patron Hold Groups" [Medium,Confirmed] |
10:46 |
mmorgan |
redavis: Yes, the goal was to add the permission to some permission groups that made sense, esp for testing. Not sure why the circ admin didn't get them. |
10:48 |
redavis |
++, sign off added |
10:49 |
redavis |
Though Elizabeth Davis (not to be confused with Ruth Elizabeth Davis) did testing before me. |
10:49 |
mmorgan |
redavis++ |
11:03 |
|
briank joined #evergreen |
11:03 |
|
Dyrcona joined #evergreen |
11:10 |
redavis |
woohoo! Used the Axe DevTools and was able to run an accessibility test! |
11:10 |
terranm |
I noted in my comment that the seed data SQL and the upgrade SQL aren't consistent. It doesn't prevent it from working properly in production, but could be a little confusing for test servers. |
11:10 |
redavis |
terranm+ |
11:10 |
redavis |
terranm++ also |
11:16 |
|
Rogan joined #evergreen |
12:13 |
|
jblyberg7 joined #evergreen |
12:21 |
Dyrcona |
Y'know what? I'm done with captchas. If a site asks me to solve a puzzle or some crap, I'm going away and not coming back. So, how is this relevant here? Well, it means no bug report on MARC::Record's RT. Well, let's see what happens if I sign in, first.
12:24 |
Dyrcona |
It wants you to solve a puzzle to log out of "guest." I know they've had issues with spam, but I'm so sick of "proving" that I'm human. (Also, it turns out that bots are better at solving some of the puzzles than humans, so they consider failure to be success, too.)
12:31 |
pinesol |
News from commits: LP2039612: regression test for creating carousels <https://git.evergreen-ils.org/?p=Evergreen.git;a=commitdiff;h=b418f2dfa8b9ec16d132b74c25c240480d2b5ed0> |
12:31 |
pinesol |
News from commits: LP2039612: Fix Carousel create / edit <https://git.evergreen-ils.org/?p=Evergreen.git;a=commitdiff;h=00bd4c88d60fedaf1a39a7d34a24f0c2e2bd2c00> |
12:34 |
|
collum joined #evergreen |
12:39 |
redavis |
terranm, have signed off on several of the accessibility LP tickets on terran-main.gapines.org. Gonna peace out for a while. Maybe the day. Back at it tomorrow as time allows.
12:52 |
|
collum joined #evergreen |
12:24 |
Stompro |
Dyrcona, what are you seeing now? |
12:41 |
Dyrcona |
Stompro: It's just that one of the exports also tried to pull in consortium-wide URIs. I think everything is OK. I've installed marc_export with the experimental patches to production. |
12:42 |
Dyrcona |
So, I meant that maybe the grep was not causing issues, but that doesn't explain the other library, so I guess it was really the problem. |
12:43 |
Dyrcona |
I'm getting similar numbers in my test environment as production at this point, so that's why I'm throwing it over there. |
12:44 |
berick |
rusty bits for the redis changes from above https://github.com/kcls/evergreen-universe-rs/tree/working/opensrf-bus-addrs-username |
12:45 |
Dyrcona |
berick++ # I was going to ask if I should update my development system for the latest OpenSRF/Redis stuff. Maybe later today..... |
12:46 |
Dyrcona |
Stompro: I'm about to add this to Lp 2015484: https://git.evergreen-ils.org/?p=working/Evergreen.git;a=shortlog;h=refs/heads/collab/dyrcona/lp2015484_exclude_opac_visible_from_export-rebase |
12:46 |
pinesol |
Launchpad bug 2015484 in Evergreen "marc_export: provide way to exclude OPAC-suppressed items" [Wishlist,Confirmed] https://launchpad.net/bugs/2015484 - Assigned to Jason Stephenson (jstephenson) |
12:46 |
Dyrcona |
That's what I'm testing and have installed in production. |
12:48 |
Stompro |
Dyrcona, ahh, I thought you meant the @orgs lookups with grep, from the performance branch. |
12:52 |
Dyrcona |
I was talking about the extra grep in the exclude hidden items branch. |
12:52 |
Dyrcona |
If it's any consolation, I had to sit for a minute earlier and rest my brain to focus on one thing at a time. :)
12:58 |
Stompro |
Dyrcona, understood. |
12:59 |
jeffdavis |
eeevil++ Bmagic++ # jit_above_cost = -1 fixed the thing on our test server also - I would never have figured that out! |
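[Editor's note: the jit_above_cost fix credited above can be applied cluster-wide without a restart; a minimal sketch, assuming superuser access. Setting it in postgresql.conf works too.]

```sql
-- A JIT cost threshold of -1 disables JIT compilation entirely.
ALTER SYSTEM SET jit_above_cost = -1;
SELECT pg_reload_conf();   -- pick up the change without a restart
SHOW jit_above_cost;       -- confirm the new value
```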
13:12 |
|
collum joined #evergreen |
13:19 |
Stompro |
JBoyer++, thank you for helping me understand the possible roadblocks with NCIPServer and ReShare. Sorry about all the questions. |
13:20 |
Stompro |
Dyrcona has explained to me before about how varied NCIP implementations can be. |
14:39 |
pinesol |
News from commits: LP1965326 Hatch Printer Settings Port Release Notes; Lint. <https://git.evergreen-ils.org/?p=Evergreen.git;a=commitdiff;h=2c707d83d6625137e7a89a7da5035b1bd84abd9f> |
14:39 |
pinesol |
News from commits: LP1965326 Move Hatch Printing to Printer Settings <https://git.evergreen-ils.org/?p=Evergreen.git;a=commitdiff;h=ac97129d9adb5a36026fc25cb44ba4debc56ddc3> |
14:39 |
pinesol |
News from commits: LP1965326 Printer Settings Angular Port <https://git.evergreen-ils.org/?p=Evergreen.git;a=commitdiff;h=b01b14fed8cee098b9911f24f4d656003f7b0f2a> |
15:07 |
Stompro |
JBoyer, if you haven't started replying to my last NCIP email... ignore it for now. I'll test out the AcceptItem and RequestItem requests with AgencyID set to our system shortname to see how it behaves in real life. |
15:08 |
JBoyer |
Stompro++ it was on the list but I'll set it aside now. |
15:32 |
|
jihpringle joined #evergreen |
16:02 |
|
sandbergja joined #evergreen |
10:43 |
|
BAMkubasa joined #evergreen |
10:50 |
|
mantis1 joined #evergreen |
10:55 |
|
briank joined #evergreen |
11:15 |
Dyrcona |
OK. I have the latest Rust marc-export test running. I'll see how long this takes. |
11:27 |
|
jihpringle joined #evergreen |
12:14 |
|
Christineb joined #evergreen |
12:18 |
eeevil |
berick: drive by question (from immediately-post hack-a-way) incoming! meetings will probably keep me away from here for the rest of the day, but want to get it to you asap for RediSRF(?) goodness |
12:20 |
eeevil |
berick: I'm looking at opensrf/redis anew via the collab branches. I have two questions that jump out at me: is the --reset-message-bus documented beyond "you must do this at redis restart" somewhere? (the commit it comes in with isn't obvious to me; also, I want to help remove any instances of boilerplate stuff that we can automate or fold into other actions as early as possible -- ideally before merge -- and I know this is under discussion or at |
12:20 |
eeevil |
least being thought about); and, while the config and password list files don't require it themselves, it /looks/ like the usernames are pinned as "opensrf", "router", and "gateway" in the code that implements each part. is that correct and do you see any reason we /can't/ replace those strings with the value pulled from opensrf_core.xml as with ejabberd? the result being ACL-separated user sets that share a redis instance but can't talk to each |
12:20 |
eeevil |
other or guess each other's user and queue names. |
12:32 |
Dyrcona |
eeevil: berick said earlier that he modified osrf_control to do --reset-message-bus as required, so it's not necessary now. I'm testing that and other updates. That's a good question about the names. (I generally don't change them.)
12:33 |
Bmagic |
I'm troubleshooting a database issue. explain analyze on an insert into biblio.record_entry results in 6+ seconds. The analyze shows me that it spends the majority of the time in reporter.simple_rec_trigger(). Which is odd, because if it's an INSERT, it should just skip down to RETURN. Weird right? |
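[Editor's note: a sketch of how the measurement above could be reproduced; the MARCXML payload and transaction id are placeholders, and the transaction is rolled back so no test record persists. Per-trigger timings, including reporter.simple_rec_trigger(), show up in the "Trigger" lines of the EXPLAIN ANALYZE output.]

```sql
BEGIN;
EXPLAIN ANALYZE
INSERT INTO biblio.record_entry (marc, last_xact_id)
VALUES ('<record xmlns="http://www.loc.gov/MARC21/slim"></record>', 'explain-test');
ROLLBACK;
```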
12:33 |
Dyrcona |
Bmagic: No. It still builds the rmsr entry. |
12:34 |
Dyrcona |
Wait until you update an authority with hundreds of bibs "attached." :) |
12:35 |
Bmagic |
this is EG 3.11.1 with queued ingest. I updated the global flag to turn off queued ingest, and tested it, and vice versa. It's 13 seconds for an insert with it turned off, and 6 seconds with it turned on. But still, 6 seconds seems insane for an insert on biblio.record_entry, especially if we're deferring all of the metabib stuff for the ingester process to come later
12:35 |
Dyrcona |
Updating/inserting bibs and auths can be slow because of all the triggers. |
12:36 |
Bmagic |
I got here because it's breaking the course reserves interface. Taking too long |
12:37 |
* Dyrcona |
colour me unsurprised. |
16:38 |
Bmagic |
Maybe I can change PG configuration to eliminate the secondary DB to prove it has something to do with that (or not) |
16:38 |
Dyrcona |
Nope. None of them checked in. |
16:40 |
Dyrcona |
Bmagic: Could be. I often have queries fail on the secondary DB, 'cause changed rows from replication are needed. I know that's not your issue, but you shouldn't be inserting into a replicated db, so I'm not sure what you're checking. |
16:40 |
Bmagic |
This is my test query: explain analyze select * from reporter.old_super_simple_record where id=671579 |
16:41 |
Bmagic |
6+ seconds on db1. and less than one second on db2 |
16:42 |
Dyrcona |
OK. That's right.
16:43 |
Dyrcona |
Hm... I need to find the circulations. The copies are all Lost status.... They're also deleted, and I thought using the copy_id would resolve the latter. |
16:51 |
Bmagic |
Repeated for all tables in metabib |
16:52 |
Bmagic |
retested, and was disappointed. Restarted postgres, still disappointed
16:55 |
jeffdavis |
Is it consistently ~6 seconds even for other IDs? Like, not 4 or 8 seconds? |
16:59 |
jeffdavis |
We were having an issue on a test server where calls to open-ils.pcrud.search.circ were consistently taking about 6.5s per circ. I never got to the bottom of it (fortunately we're not seeing that in production). It's a long shot but maybe there is a connection? Don't want to send you down a rabbit hole though! |
17:00 |
Dyrcona |
There are only 1 or 2 tables involved in that function. Everything else is 4 views that percolate up. |
17:00 |
jeffdavis |
But the flesh fields for the problematic call include wide_display_entry. |
17:01 |
Dyrcona |
Which is a view, IIRC. |
17:09 |
jeff |
out for now, back later! |
17:10 |
Bmagic |
https://explain.depesz.com/s/Df5A |
17:11 |
Bmagic |
so that seems to have revealed that it's sequential scanning display_field_map |
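[Editor's note: two quick follow-ups when a plan shows an unexpected Seq Scan, sketched below; the config schema for display_field_map is assumed here.]

```sql
-- How often the planner has chosen sequential vs. index scans on the table:
SELECT schemaname, relname, seq_scan, idx_scan, n_live_tup
  FROM pg_stat_user_tables
 WHERE relname = 'display_field_map';

-- Refresh planner statistics, then re-run the EXPLAIN:
ANALYZE config.display_field_map;
```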
17:16 |
jeffdavis |
I can confirm we're seeing the same behavior on our test server with your example query (explain (analyze, buffers) select * from reporter.old_super_simple_record where id=x) |
17:16 |
Bmagic |
so weird, two databases, one a replica of the other. I just synced up those two settings in postgresql.conf and restarted the primary database. The query is still sequentially scanning that table instead of using an index. I run the same query against the secondary database and it's fast
17:17 |
Bmagic |
jeffdavis: yours is sequential scanning?
17:17 |
jeffdavis |
test db (slow): https://explain.depesz.com/s/pkwY |
17:17 |
jeffdavis |
prod db (fast): https://explain.depesz.com/s/UWu5 |
17:19 |
Bmagic |
jeffdavis: interesting. In my case, it's prod that's slow and the secondary database is fast |
17:20 |
Bmagic |
what's even more interesting is that for your fast case, it's seq scanning display_field_map and apparently that's not taking much time at all
17:24 |
Bmagic |
jeffdavis: PG14? |
17:24 |
jeffdavis |
yes |
17:25 |
Bmagic |
jeffdavis++ |
17:25 |
jeffdavis |
The servers are pretty different, the test db is the product of pg_dump/pg_restore rather than replication |
17:25 |
Bmagic |
yeah, I'm tempted to dump/restore as well |
17:26 |
Bmagic |
I'm out of ideas, so that one comes to mind :)
17:27 |
jeffdavis |
well, we're seeing it after dump/restore, but who knows, maybe it would help! |
10:03 |
Dyrcona |
This could be a case where joins are slower than subqueries for .... reasons. |
10:04 |
Dyrcona |
It could also be completely different on a newer Pg release. |
10:04 |
Stompro |
I've started using NYTProf now; it is much heavier weight, but the output is much more detailed. https://metacpan.org/pod/Devel::NYTProf
10:08 |
Stompro |
Devel::Profile is much faster for quickly seeing results. NYTProf generates a 2G output file in my testing that then has to be processed into an HTML report. |
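[Editor's note: typical invocations for the two profilers being compared; the marc_export path and arguments are examples, not taken from the log.]

```sh
perl -d:Profile /openils/bin/marc_export --all > /dev/null    # writes prof.out
perl -d:NYTProf /openils/bin/marc_export --all > /dev/null    # writes nytprof.out
nytprofhtml --file nytprof.out --out nytprof_report           # render the HTML report
```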
10:08 |
Dyrcona |
I think I'll go for faster/lighter weight. I'll read the docs.
10:09 |
Dyrcona |
My code to log subroutine names with timestamps as they're called produces a largeish output and is likely less accurate since it spends time generating timestamps and printing them. |
10:10 |
Stompro |
All I had to do was run marc_export with "perl -d:Profile ./marc_export_testing" and it generates the profile log prof.out |
10:10 |
Dyrcona |
I'll make a patch and throw it on the LP. |
10:11 |
Dyrcona |
I mean a patch for my logging code. |
10:11 |
Dyrcona |
So, you think we should just switch from insert_grouped_field to insert_field? |
10:13 |
Stompro |
I put a diff of the changes I was testing with at https://gitlab.com/LARL/evergreen-larl/-/snippets/3615366 |
10:14 |
Stompro |
Put all the 852s in an array, then once they are all added, call insert_grouped_field for the first one to get the same ordering, and use insert_fields_after for the rest with one call. |
10:18 |
Dyrcona |
Stompro: There's a simpler way to do the insert: push the fields to an array, then do the first one with shift and the rest of the array after that. |
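[Editor's note: a minimal sketch of the shift-based variant Dyrcona describes, keeping Stompro's grouped-first-field ordering; the loop variables and subfields are illustrative, not marc_export's actual code.]

```perl
use MARC::Field;

my @holdings;
for my $item (@items) {    # hypothetical per-item loop
    push @holdings, MARC::Field->new('852', ' ', ' ',
        b => $item->{circ_lib}, j => $item->{call_number});
}
my $first = shift @holdings;
$record->insert_grouped_field($first);                         # keeps 8xx group ordering
$record->insert_fields_after($first, @holdings) if @holdings;  # one call for the rest
```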
10:19 |
Stompro |
I figured, my perl array skills need work. :-) |
10:20 |
Dyrcona |
I wonder if the first one even needs to be grouped? |
10:20 |
Dyrcona |
I'm going to look at MARC::Record again. |
10:21 |
Dyrcona |
Stompro++ # For the notes in the snippets. |
10:21 |
Stompro |
In my test data, the 901 tag would be placed before the 852 without using the insert_grouped_field for the first. |
10:23 |
Stompro |
I don't think MARC::Record re-orders the fields. |
10:24 |
Stompro |
If I'm understanding where you are going with that. |
10:33 |
Dyrcona |
OK. LoC says the fields are supposed to be grouped by hundreds; they don't have to be in order.
10:41 |
|
briank joined #evergreen |
10:44 |
Dyrcona |
Oh! That patch I threw on Lp is missing a local change to format the microsecond timestamps to %06d..... |
10:53 |
Dyrcona |
Heh. This branch is a mess.... |
10:54 |
Dyrcona |
So, I was testing with a dump of 1 library's records. It took about 1 hour 4 minutes to run. I'll make a change based on Stompro's bug description and see what happens. |
11:06 |
Dyrcona |
OK. Here goes.... |
11:11 |
Stompro |
Dyrcona, does this library have some of the bibs with large numbers of copies? |
11:20 |
Dyrcona |
I don't know. I doubt it. It doesn't seem to have made much difference so far. I'll try a larger library or the whole consortium next. |
11:20 |
Dyrcona |
It's one we do a weekly export for, so that's why I chose it to test. |
11:28 |
Dyrcona |
It does use slightly less (~3%) CPU |
11:30 |
Stompro |
In my testing, with our production data it had only a slight improvement. But it really improved the run that was stacked with bibs with lots of copies. |
11:36 |
Stompro |
acps_for_bre needs to be reworked to improve the --items performance in general. Maybe the first call just pulls in all call numbers and copies and caches them in a hash... |
11:37 |
Stompro |
Or go with the rust version that is already better :-) |
11:43 |
Dyrcona |
When I ran the queries through EXPLAIN ANALYZE, none of them were particularly slow. The slowest was the acps_for_bre query. On one particular record, it spent 40ms on a seq scan of copy.cp_cn_idx. I'm not sure how to improve a seq scan on an index, unless it can be coerced to a heap scan somehow.
12:34 |
|
collum joined #evergreen |
12:38 |
Dyrcona |
Heh. Almost 1 minute longer..... |
12:50 |
|
collum joined #evergreen |
13:02 |
Dyrcona |
I am testing this now: time marc_export --all -e UTF-8 --items > all.mrc 2>all.err |
13:14 |
Dyrcona |
The Rust marc export does batching of the queries by adding limit and offset. I wonder if we should do the same? I've noticed that the CPU usage goes up over time, which implies that something is struggling through the records. The memory use stays mostly constant once all of the records are retrieved from the database. |
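[Editor's note: a sketch of the LIMIT/OFFSET batching described above; the batch size is arbitrary, and keyset paging on id (WHERE id > $last_id) would avoid OFFSET's growing rescan cost.]

```sql
SELECT id, marc
  FROM biblio.record_entry
 WHERE NOT deleted
 ORDER BY id
 LIMIT 1000 OFFSET 0;   -- advance OFFSET by the batch size on each round
```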
13:20 |
Stompro |
Dyrcona, if you use gnu time, it gets you max memory usage also. /usr/bin/time -v... so you don't have to check that separately. |
13:25 |
Stompro |
Dyrcona, I'm surprised the execution time increased for you... hmm. |
14:00 |
csharp_ |
berick: after installing the default concerto set, notes work - everything is speedy under redis - no errors yet |
14:07 |
|
sleary joined #evergreen |
14:16 |
berick |
csharp_: woohoo |
14:35 |
Dyrcona |
I've been testing Redis with production data, but not much lately. I need to write an email to ask the relevant staff here if they'd like me to update the dev/training server to use Redis on the backend. |
14:37 |
Dyrcona |
Current calculation puts it at only 20% faster, i.e. -1 day. |
14:38 |
Dyrcona |
I'm going to add a --batch-size option. If it is specified the main query will retrieve that number of records per request. I don't know if I'll get that implemented today. |
14:51 |
Dyrcona |
Looks like adding tags isn't my bottleneck. My current estimate is minimal difference in performance. I'm going to let this run over the weekend to see if I'm wrong. On Monday, I'll add a batching/paging option to the main query and see if that helps.
09:08 |
Dyrcona |
berick: I did `cargo build --release --package evergreen` then copied eg-marc-export to /openils/bin/. I missed the password on one of the two lines for eg-marc-export in my script, so I don't know if it is faster, but the binary is certainly smaller without the debugging symbols, etc.
09:14 |
|
redavis joined #evergreen |
09:18 |
|
terranm joined #evergreen |
09:18 |
Dyrcona |
FWIW, I haven't used --release on my test system. I did that for the production server. |
09:40 |
csharp_ |
one of the Supy/Limnoria bots in another IRC channel I was in had a @botsnack command and the bot would reply "YUM!" |
09:41 |
csharp_ |
actually in the Ubuntu channel, it would say "YUM!.. I mean, er, APT!" |
09:41 |
bshum |
Put that on the wishlist for bot development tasks... got it :) |
15:18 |
kmlussier1 |
I don't know what the harlequin/silhouette subscription thing was, but I was just talking fondly this week about my Columbia House mail order subscription. Those were good times! |
15:19 |
kmlussier1 |
Oh, my nick is messed up. How sad. Anyway, have a good weekend #evergreen! Safe travels to all who will be going to Indianapolis! |
15:19 |
|
kmlussier1 left #evergreen |
15:21 |
terranm |
The true Gen Xer test is the exposure to the Flowers in the Attic books |
15:21 |
terranm |
Those were passed around like contraband in my junior high |
15:22 |
redavis |
Oh, full exposure here. I was just thinking about those the other day and how wrong it all was. |
15:23 |
redavis |
hmm, I'm not sure how I got ahold of them. Kinda think my mom might have accidentally bought them at a garage sale or something like that. The paperbacks swirled. Same with Stephen King.
15:24 |
terranm |
So much Stephen King! |
10:28 |
mantis1 |
was missing one of those, thank you!
10:32 |
jeff |
recall holds are a thing that requires some additional A/T setup. I don't know if the defaults are suitable / functional out of the box. Org unit settings likely also. |
10:33 |
jeff |
I was actually just wondering if it made sense to have an option to hide recall as a hold option, for those reasons and more. :-) |
10:39 |
Dyrcona |
berick: My test of the Rust marc export finished in 9 hours 23 minutes, and exported a 3.4GB binary file with 1,737,349 records. There are 411 records in the error output. That looks good to me, compared to what I'm getting from Perl.
10:46 |
csharp_ |
rs++ |
10:57 |
Dyrcona |
csharp_: Do you think the marc_export is slow with the --items option? |
10:57 |
|
briank joined #evergreen |
10:59 |
berick |
Dyrcona: cool, good to know. |
11:00 |
berick |
re: errors, I saw a record or 2 in my tests that had subfield codes of "" (zero bytes) |
11:00 |
berick |
could add a flag or something to massage those into " " or some such |
11:01 |
csharp_ |
Dyrcona: yes it is |
11:01 |
Dyrcona |
I see some of those. I'll look through it later. I know we have bad records, because the Perl also complains. |
11:02 |
Dyrcona |
csharp_: Thanks. I felt like I was going crazy because others have said it's not that slow. It was taking 5 days to do the same export I mentioned above.
11:19 |
kmlussier |
Dyrcona: Wow, that's great that you were able to see such a big improvement! |
11:21 |
Dyrcona |
kmlussier: Yeah, I think Rust is more efficient than Perl, at least in my situation. I'm doing a test in production this evening.
11:27 |
Dyrcona |
Y'know....It would be cool in psql if you could use \i for a subquery or in a CTE. That would save having two copies of the same code. (I know.... "Find another way to do it.") |
11:56 |
Dyrcona |
@decide Redgum or Rimu |
11:56 |
pinesol |
Dyrcona: go with Redgum |
15:28 |
sharpsie |
Burn It Down and Rebuild™ |
15:32 |
|
book` joined #evergreen |
15:40 |
JBoyer |
Sometimes a forest fire is responsible forest management! :p |
15:43 |
jeff |
okay, about to bug this unless it rings a bell for someone and they point out it's already got a bug: Angular MARC editor stashes field data in current_tag_table_marc21_biblio in a format that's just different enough to break context menus for the AngularJS MARC editor (which you may encounter in at least the Z39.50 Edit then Import interface). |
15:44 |
jeff |
for the Angular MARC editor, the way it creates the local data structure results in an array with element 100 being tag 100, etc. The older AngularJS MARC editor created an object with key 100, not array index 100, etc. |
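[Editor's note: a sketch of the shape mismatch jeff describes; the names are illustrative, not the actual tagtable service code.]

```ts
const tagDef = { tag: '100', description: 'Main Entry--Personal Name' };

// The older AngularJS tagtable service cached an object keyed by tag string:
const asObject: Record<string, unknown> = { '100': tagDef };

// The newer Angular tagtable service builds a sparse array indexed by tag number,
// so elements 0-99 serialize as null and string-key lookups no longer match:
const asArray: unknown[] = [];
asArray[100] = tagDef;
```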
15:45 |
jeff |
and of course, it's technically the tagtable service (old and new) that are doing this, not the "MARC editor"... |
15:47 |
jeff |
looks like it affects 3.10 and 3.11 at least, haven't tested main or looked too closely to see if the Z39.50 Edit then Import has moved to Angular yet. |
15:52 |
jeff |
steps to reproduce: clear (at least) current_tag_table_marc21_biblio from localStorage, then open the Angular MARC editor followed by Edit then Create from a Z39.50 search (which uses the older AngularJS MARC editor), and try to open the context menu on a tag value (other than the leader). |
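[Editor's note: the cache-clearing step can be done from the browser console; the key name is taken from the report above.]

```js
localStorage.removeItem('current_tag_table_marc21_biblio');
```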
15:52 |
jeff |
If you do the reverse, it works okay (the newer editor can tolerate the data in both formats, possibly by design, possibly by happenstance). |
15:53 |
jeff |
(or happy accident?) :shrug: :-) |
16:16 |
sleary |
Z39.50 in Angular and a revised MARC editor that relies less on context menus are both under construction now |
16:19 |
jeff |
the latter is bug 2006969, I think. |
16:19 |
pinesol |
Launchpad bug 2006969 in Evergreen "wishlist: enhanced MARC editor rewrite" [Wishlist,Confirmed] https://launchpad.net/bugs/2006969 - Assigned to Stephanie Leary (stephanieleary) |
17:01 |
jeff |
sleary++ jihpringle++ |
17:02 |
jeff |
(I'll still create a bug so we can fix 3.10/3.11) |
17:05 |
|
mmorgan left #evergreen |
17:07 |
abneiman |
jeff: I am not finding an LP for sprint A (which is on me, and I will rectify shortly) - but the spec you linked is the one we're working from. It's opening for testing this week, and MARC Editor just opened for testing, so look for both on the spring release roadmap!
17:08 |
abneiman |
also, apologies in advance for the flood of commits about to ensue ... next time I will learn to squash |
17:11 |
pinesol |
News from commits: Docs: LP2022100 updates to Item Status docs <https://git.evergreen-ils.org/?p=Evergreen.git;a=commitdiff;h=e36ddd543aba55fd6ce4aeb5dd968b6f26eec4a4> |
17:11 |
pinesol |
News from commits: Docs: LP2022100 updates to Item Status docs <https://git.evergreen-ils.org/?p=Evergreen.git;a=commitdiff;h=483d2f78eda53393c02a05eb54f5b91baf503c26> |
17:11 |
pinesol |
News from commits: Docs: LP2022100 updates to Item Status docs <https://git.evergreen-ils.org/?p=Evergreen.git;a=commitdiff;h=c2ce3c25973f9a2128970cef0e1b2e0a85f1a022> |
10:41 |
redavis |
Thank you for your many hours of coffee roasting sacrifice. |
10:47 |
|
smayo joined #evergreen |
10:57 |
|
briank joined #evergreen |
11:00 |
Dyrcona |
berick: I don't know what the size of the file was at 143,000 records. The final file is 15GB with 2,251,864 bib records and 7,775,264 items. It took 18.3 hours to complete on my test system. |
11:00 |
Dyrcona |
berick++ |
11:04 |
Dyrcona |
Hmm. I guess that export running on production explains the "processor load too high on utility server" emails. It keeps 1 CPU busy all the time, and there are only 8 cpus. Load was 11.5 when I just checked.
11:04 |
Dyrcona |
Bmagic: I'm installing Rust on the CW MARS utility server today. |
14:53 |
Dyrcona |
Currently, it's actually returning acn.record, bre.marc, and the count on acp.id. |
14:56 |
Dyrcona |
Dude..... I just noticed the file ends with two semicolons.....I'll bet that's it. |
14:56 |
Dyrcona |
Still, I think I'll do the CTE. |
14:57 |
berick |
my test file contains: SELECT id, marc FROM biblio.record_entry WHERE NOT deleted (no semicolons needed)
14:58 |
berick |
afk for a bit |
14:59 |
Dyrcona |
Yeah, ; is a habit from writing stuff for psql. |
15:07 |
pinesol |
News from commits: LP2035287: Update selectors in e2e tests <https://git.evergreen-ils.org/?p=Evergreen.git;a=commitdiff;h=f562b3ac30a3753d63d565c2d7be4d3a7121a2fb> |
15:11 |
Dyrcona |
Now, I'm cooking with gas. I got the "large" bibs file (85 records with 87,296 items) dumped to XML in under a minute. That took almost 2 hours with the Perl program the other day. |
15:18 |
Dyrcona |
Using query-file to feed the eg-marc-export, it is using a lot more RAM than before, about the same as the Perl export was using. It still uses less CPU. We'll see if that changes over time. |
15:28 |
jeff |
you have 87,296 items that are spread across only 85 bibs? |
15:55 |
|
brianmk joined #evergreen |
15:56 |
berick |
Dyrcona: --pipe option pushed. i'm also happy to see any errors you encounter. |
15:57 |
Dyrcona |
berick++ |
15:57 |
Stompro |
Do the test email / sms features require that the app servers be able to send email? |
15:57 |
Dyrcona |
I might work up the nerve to submit some PRs later. |
15:58 |
Dyrcona |
Stompro: I don't remember, but there is something that does. |
15:59 |
Stompro |
I'm so used to only our utility server sending out email... |
16:01 |
Dyrcona |
Stompro: I think that's bug worthy if that's the case. |
16:02 |
Stompro |
I think I've been under the incorrect impression that action_trigger_runner had to be called to send email.... |
16:07 |
Dyrcona |
I could be wrong, but I swear I remember there being something that required the app servers to send email. Test emails and SMS go through action trigger. |
16:08 |
berick |
Stompro: open-ils.actor.event.test_notification ? looks like that fires the A/T event in real time from whatever server the API is called at. |
16:08 |
berick |
it doesn't wait for a_t_runner |
16:09 |
berick |
a variation on that API that creates but does not fire the event would be useful |
16:10 |
Dyrcona |
berick++ |
16:13 |
Stompro |
Hmm, my flowroute sms reactor is only set up on our utility server... I'll have to figure out how to get those test messages working.
16:14 |
Stompro |
There probably is no reason it couldn't be on the app servers also. |
16:16 |
Dyrcona |
Stompro: I think a setting could be added to make the test notification API NOT fire the event, just leave it pending, then the pending a/t runner could pick it up, or it could be given the same granularity as the password reset emails.
16:16 |
Dyrcona |
It would require development, of course. |
16:16 |
Stompro |
I was just stumped when I looked at the event def 155 entries and saw that they were processed 1 second after they were created. |
16:17 |
Dyrcona |
Yeah, in my case they're 523 and 524... |
10:40 |
mmorgan |
Stompro: extend_reporter.legacy_circ_count |
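[Editor's note: a usage sketch for the table mmorgan names; the column names and the companion full_circ_count view are recalled from the schema from memory and should be verified against your Evergreen version.]

```sql
-- Migrated (pre-Evergreen) circ count for one copy (copy id is a placeholder):
SELECT circ_count FROM extend_reporter.legacy_circ_count WHERE id = 12345;

-- Combined legacy + current + aged circs, if the view matches memory:
SELECT circ_count FROM extend_reporter.full_circ_count WHERE id = 12345;
```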
10:40 |
Stompro |
mmorgan++ thank you!! |
10:41 |
mmorgan |
YW! |
10:43 |
Stompro |
Dyrcona, I was conversing with Brian about his marc_export issues. And I did a test export of our DB. 200K bibs took 5 minutes, used 1.2GB of RAM, and created a 1GB uncompressed xml file, which compressed to 115MB.
10:43 |
Dyrcona |
user_ingest_name_keywords_tgr BEFORE INSERT OR UPDATE ON actor.usr FOR EACH ROW EXECUTE PROCEDURE actor.user_ingest_name_keywords() <- I think that should maybe be an AFTER, but I'll play with it. |
10:45 |
Stompro |
That was without items... I should try it again with items. |
10:45 |
Dyrcona |
Stompro: I still have to do this in production, but it has always taken longer than that. Are you feeding IDs to marc_export, and what marc_export options are you using? |
10:48 |
|
briank joined #evergreen |
10:49 |
Dyrcona |
Also, I realize my comment about the user_ingest_name_keywords_tgr needing to be AFTER is bogus. I pulled an extra couple of fields in my query and discovered why I was seeing what I thought was anomalous. |
10:50 |
|
kworstell_isl joined #evergreen |
10:51 |
Dyrcona |
Some test accounts have weird names. :) |
10:52 |
Dyrcona |
The problem could be the extra query to grab item data combined with the massive amount of data. |
10:54 |
Stompro |
Dyrcona, --items didn't change the memory usage, still 1.2G for 194432 bibs.. run time seems longer.. I'll report back when done. |
10:56 |
Dyrcona |
I think running that select for items in a loop is the real issue. I should refactor this to grab the items at the time the query runs. That complicates the main loop though. |
11:13 |
Dyrcona |
I estimate it will export about 1.78 million records. |
11:14 |
Stompro |
Dyrcona, Nevermind about the cpu usage, that was me seeing the 9% memory used by mistake. |
11:15 |
Stompro |
Using your method I see 67% CPU
11:16 |
Dyrcona |
I'm going to try smaller batches in production starting this afternoon to see if it helps. I may or may not stop the one running on a test vm. Maybe it is time to refactor export to speed things up? |
11:24 |
pinesol |
News from commits: Docs: LP1845957 Permissions List with Descriptions <https://git.evergreen-ils.org/?p=Evergreen.git;a=commitdiff;h=2680eca9e4dbaa79b2cd00c7fd3373311b85901c> |
11:24 |
pinesol |
News from commits: Docs: LP1845957 Part 1 - Update describing_your_people.adoc <https://git.evergreen-ils.org/?p=Evergreen.git;a=commitdiff;h=c7b205fb604e7827843c7a5ea6542ce02c2f72ef> |
11:25 |
Dyrcona |
I think I'll break for lunch early and start on that about noon. |
12:49 |
eeevil |
smayo: I do not recall ... it's been A While (TM) ;) ... I can look, though |
12:51 |
Dyrcona |
hmm... marc_export should exit when a bad command line option is passed in. |
12:56 |
Dyrcona |
It's taking a while to export the "large" bibs. It has to be the items query that is slowing things down. These are 90 bibs in our database with > 500 items. |
12:57 |
eeevil |
@later tell smayo I do not recall ... it's been A While (TM) ;) ... I can look, though. UPDATE: looks like it was a "just check a week" flag, basically, though the breakout variable is similar (if 15x larger). if skipping dow_count testing makes everything happy, I'm for it. |
12:57 |
pinesol |
eeevil: The operation succeeded. |
12:58 |
|
smayo joined #evergreen |
12:58 |
Dyrcona |
:) |
13:10 |
Dyrcona |
18 minutes and 54 seconds and only 13 records exported.... Well, I know where I need to look. |
13:34 |
|
smayo joined #evergreen |
13:48 |
|
smayo joined #evergreen |
13:50 |
Dyrcona |
58 minutes and it is a bit over halfway done with the 90 large records. I have a 'debug' version of marc_export that I'll use to dump the queries on a test system. |
13:56 |
|
jihpringle joined #evergreen |
14:24 |
pinesol |
News from commits: Docs: updates to Z39.50 documentation <https://git.evergreen-ils.org/?p=Evergreen.git;a=commitdiff;h=bb4d795eb16102c35d26f0d59d14da50b86605e4> |
14:54 |
Dyrcona |
Doing batches does not appear to have improved performance. If anything, it seems worse, but maybe my dev database is faster than production. |
15:11 |
berick |
./eg-marc-export --items --to-xml --out-file /tmp/recs.xml # --help also works / other options to limit the data set |
15:12 |
Dyrcona |
berick++ I'll give it a look. |
15:12 |
berick |
Dyrcona++ |
15:21 |
jeffdavis |
Interesting performance problem on our test servers - the Items Out tab is very slow to load. Looks like the call to open-ils.pcrud.search.circ is consistently taking 6.5 seconds per item for some reason. |
15:21 |
berick |
oof |
15:22 |
jeff |
do the items in question have extremely high circulation counts? |
15:23 |
jeff |
we have a test item that gets checked in and out multiple times a day via SIP2 for a Nagios/Zabbix style check. |
15:23 |
jeff |
I once made the mistake of using that same item to test something unrelated, and it took consistently 6 seconds or more to retrieve in item status. |
15:24 |
jeffdavis |
No, these are just randomly selected items - the one I checked has 8 total circs. |
15:24 |
jeff |
(It may not apply in this case. I don't think the pcrud search is going to be trying to get a total circ count for the items.) |
15:24 |
jeff |
ah, drat. |
15:25 |
jeffdavis |
Our production environment is not affected, but a test server running the same version of Evergreen (3.9) is, as is a different test server running 3.11. |
15:25 |
|
mmorgan1 joined #evergreen |
15:28 |
Stompro |
jeffdavis, are they all on the same Postgres version? |
15:29 |
jeffdavis |
Yes, all PG14. The test servers all share the same Postgres server, my guess is that's where the issue lies but not sure what the mechanism would be. |
15:30 |
Dyrcona |
jeffdavis: You have all of the latest patches for Pg installed? |
15:30 |
Dyrcona |
Meaning Evergreen patches. |
15:32 |
jeffdavis |
The affected servers are either 3.9.1-ish or 3.11.1-ish with some additional backports -- not fully up to date but pretty close. Are there specific recent patches you're thinking of? |
15:51 |
berick |
IOW, evergreen-universe-rs does a variety of stuff, but when it talks to EG, it assumes Redis is the communication layer |
15:51 |
berick |
ah, no, redis required for those actions |
15:55 |
pinesol |
News from commits: Docs: Circulation Patron Record Page <https://git.evergreen-ils.org/?p=Evergreen.git;a=commitdiff;h=9d632b3589a263333a187fda59a708fe672f2813> |
15:56 |
Dyrcona |
If I'm going to start messing with Rust, I guess I should dust off the VM where I tested the RedisRF branches. |
15:56 |
berick |
muahaha i have successfully distracted you :) |
15:57 |
Dyrcona |
:) |
16:01 |
Dyrcona |
If the Rust export is faster, then I won't consider it a distraction. :) |
11:05 |
Dyrcona |
It might need more patches than just that one.... I'll leave it for now. |
11:14 |
|
kmlussier joined #evergreen |
11:18 |
Dyrcona |
So, going back to yesterday's conversation about MARC export, I wonder if that commit really was the problem. I reverted that one and two others, then started a new export. It has been running for almost 21 hours and only exported about 340,000 records. I estimate it should export about 1.7 million. |
11:20 |
Dyrcona |
At that rate, it will still take roughly 5 days to export them all. This is on a test system, but it's an old production database server and it's "configured." The hardware is no slouch. I guess I will have to dump queries and run them through EXPLAIN.
11:28 |
Dyrcona |
Y'know what. I think I'll stop this export, back out the entire feature and go again. |
11:29 |
jeff |
if it's similar behavior as yesterday and most of the resource usage appears to be marc_export using CPU, I'd suspect inefficiency in the MARC record manipulation or in dealing with the relatively large amount of data in memory from the use of fetchall_ on such a large dataset. |
11:29 |
jeff |
even if it's not swapping, dealing with that large a data structure might be giving Perl / DBI a challenge. |
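[Editor's note: a sketch of the fetchall_ concern; streaming rows keeps memory flat, where a fetchall_* call materializes the whole result set at once. The handles, query, and helper name are illustrative, not marc_export's actual code.]

```perl
my $sth = $dbh->prepare('SELECT id, marc FROM biblio.record_entry WHERE NOT deleted');
$sth->execute;
while (my ($id, $marc) = $sth->fetchrow_array) {   # one row in memory at a time
    export_record($id, $marc);                     # hypothetical per-record work
}
# versus: my $all = $sth->fetchall_arrayref;       # entire result set held at once
```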
11:30 |
jeff |
(both of those are just guesses, though. I don't have time this week to experiment to test the theories.) :-) |
11:30 |
Dyrcona |
jeff: That might be it. Maybe it's actually the other patch that I pushed yesterday to manipulate the 852? |
11:30 |
jeff |
I like your idea of next step, though. Especially if you've exported a set this large before without issue. |
11:31 |
Dyrcona |
Well, "without issue" is up for debate.... |
12:07 |
pinesol |
Launchpad bug 1788680 in Evergreen 3.3 "Null statcats break copy templates" [High,Fix released] https://launchpad.net/bugs/1788680 |
12:07 |
|
jihpringle joined #evergreen |
12:08 |
Dyrcona |
We're on 3.7, and I think it stores fleshed JSON objects. The Angular client looks like it only stores stat cat ids. |
12:08 |
jeff |
I may resort to empirical testing. Copy affected JSON to a 3.10 system and re-test there. :-) |
12:08 |
Dyrcona |
I didn't make a note of the line numbers. I didn't think I'd want to look at it again.
12:09 |
Dyrcona |
On 3.7, it looks like it includes everything in the template including alerts. I seem to recall in master, it ONLY looks at stat cats and only puts in the ID. This may come up again on Friday, so I think I'll have another look.
12:14 |
Dyrcona |
I should have made notes. |
12:40 |
jihpringle |
the new alert messages have also caused some funky template issues - https://bugs.launchpad.net/evergreen/+bug/2022349 |
12:40 |
pinesol |
Launchpad bug 2022349 in Evergreen "Remove old-style copy/item alerts" [Medium,Confirmed] |
12:56 |
Dyrcona |
Everyone have a good rest of your day (or night)! I'm taking off early. |
13:11 |
mmorgan |
jeff: I'm familiar with 1788680, but have seen other issues with stat cats in templates, too. |
13:11 |
mmorgan |
Mostly templates trying to apply stat cat entries that no longer exist. |
13:11 |
|
collum joined #evergreen |
13:13 |
mmorgan |
I think I've seen the -1 once or twice, too. Maybe that was an attempt to remove the stat cat value? |
13:13 |
|
jihpringle joined #evergreen |
13:17 |
mmorgan |
We're not yet using the angular holdings editor, but in testing, are becoming aware of things lurking in templates that get applied and saved in items in Angular, but were ignored in angularjs. Like the copy alerts in 2022349 |
13:28 |
|
redavis joined #evergreen |
13:32 |
eby |
https://docs.paradedb.com/blog/introducing_bm25 |
13:34 |
jeff |
I tested one of the "statcats":{"24":-1} templates on a 3.10 system and while it did set the stat cat in question to "unset" (likely the original template's intent/origin), it did so in a way that was successful, and allowed the item to be saved. On 3.7, the behavior was that the stat cat was invalid and failed to save. |
13:36 |
jeff |
eby: saw that ("pg_bm25: Elastic-Quality Full Text Search Inside Postgres"). Also saw some comments at https://news.ycombinator.com/item?id=37809126 from a few days ago that I bookmarked for later. |
13:37 |
jeff |
(the "24" in my template fragment isn't significant, it's just the ID of a copy stat cat here) |
13:56 |
|
mantis1 left #evergreen |
14:24 |
berick |
eby: neat. |
14:24 |
berick |
also neat https://github.com/quickwit-oss/tantivy |
10:17 |
jeff |
That's what, 3.2 records per second? That seems exceptionally slow. Any ideas/suspects, or are you just noticing and looking into it? |
10:22 |
jeff |
5.9 kb average record size seems reasonable, especially with holdings data added. Our non-deleted bibs (excluding a bunch of deleted stubby ILL bibs) are about 4.5 kb on average. |
10:23 |
jeff |
anyway, if you had something looping or some kind of unintentionally additive loop/array, i'd expect you'd have a lot more than 7.7 GB of output. |
10:26 |
Dyrcona |
jeff: I'm not really looking into it. I'm testing some patches on a test system, and also doing a test run for a big Aspen export. I thought I'd just run marc_export like this and time it: /usr/bin/perl /openils/bin/marc_export --all --items --exclude-hidden --852b circ_lib --format XML |
10:27 |
Dyrcona |
`ps -o etime 24586` said 4-19:00:01 just a few seconds ago. |
10:28 |
Dyrcona |
I'm running it with time, but was curious how long it has been going so far. |
10:29 |
Dyrcona |
Adding --items seems to really slow it down on this setup. |
10:50 |
Dyrcona |
I'd like to run it through explain. It's probably the queries to grab item information to add to the MARC, so I'll have to dump an example of that, too. |
10:52 |
Dyrcona |
Guess I will be looking into it later.... *sigh* |
10:52 |
jeff |
or tweak log_min_duration_statement just long enough to capture some sample queries. depends on how otherwise loaded your db server is, if this is prod. |
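[Editor's note: a sketch of the logging tweak jeff suggests; the threshold is arbitrary, and the setting is reset afterward.]

```sql
ALTER SYSTEM SET log_min_duration_statement = '250ms';  -- log statements slower than this
SELECT pg_reload_conf();
-- ...run the export, inspect the Postgres log, then put it back:
ALTER SYSTEM RESET log_min_duration_statement;
SELECT pg_reload_conf();
```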
10:56 |
Dyrcona |
This is a test system that hosts multiple databases, but this is the only instance currently doing anything. |
10:56 |
jeff |
ah, that makes many things easier. |
10:56 |
Dyrcona |
I wonder if the number of page faults in marc_export matters.... |
10:57 |
jeff |
if you're trying to hold that whole resultset in memory... I haven't looked at how marc_export does paging, if at all. |
11:03 |
Dyrcona |
It's using 98.2% CPU, which is nothing because the VM is configured with 16. The hardware has 32 with HTT enabled.
11:03 |
Dyrcona |
It's not running on the same server as the DB either. |
11:06 |
Dyrcona |
I'll have to do some investigation to see where the problem lies. Maybe I can get some improvements for everyone out of this. |
11:20 |
Dyrcona |
jeff++ # I may just crank the logging up for a test run later. I suspect this one will finish sometime later today, but I also thought that it would have been done by yesterday to start with.
11:24 |
Dyrcona |
FWIW, I'm dumping XML because it's "easier" to work with than binary MARC, but when a file is about 8GB in size, the format doesn't really matter any longer, does it? :) |
11:25 |
|
collum joined #evergreen |
11:33 |
|
kmlussier joined #evergreen |
11:39 |
|
jihpringle joined #evergreen |
13:00 |
Dyrcona |
Y'know. I think I'll also do a similar extract with the --items option and time that. I also want to check what sandbergja reported on Lp 2015484. |
13:00 |
pinesol |
Launchpad bug 2015484 in Evergreen "marc_export: provide way to exclude OPAC-suppressed items" [Wishlist,Confirmed] https://launchpad.net/bugs/2015484 - Assigned to Jason Stephenson (jstephenson) |
13:41 |
Dyrcona |
jeff: I think some of the patches that I am testing are responsible for the slowdown, particularly the one for the above Lp bug.
13:45 |
Dyrcona |
I think I'll revert a couple of commits before I say much more. |
14:21 |
Dyrcona |
Hmm... Looks like I have somewhere in the vicinity of 400,000 records left to export. I think I'll stop this one and try again with the suspected commits reverted. |
14:25 |
Dyrcona |
Think I'll export to a binary MARC file this time. At least the file will be smaller. |
15:18 |
Bmagic |
#info Pullrequest tag Added - 19 |
15:18 |
Bmagic |
#info Signedoff tag Added - 8 |
15:18 |
Bmagic |
#info Fix Committed - 12 |
15:18 |
Bmagic |
#topic New Business - Should we interfile nightwatch tests into the typical eg2/src/app directory (bug 2036831)? - Jane |
15:18 |
pinesol |
Launchpad bug 2036831 in Evergreen "Move nightwatch tests into the same directory as the code they test" [Wishlist,Confirmed] https://launchpad.net/bugs/2036831 |
15:18 |
mmorgan |
Lots to test, lots to commit :) |
15:18 |
Bmagic |
#link https://bugs.launchpad.net/evergreen/+bug/2036831 |
15:19 |
sandbergja |
Ooh, this came up in the newdevs group! Especially from sleary, so please jump in if I get something wrong :-)
15:19 |
Bmagic |
mmorgan++ # thanks for doing the stats! it's nice to see them like that |
15:20 |
sandbergja |
the idea was to reduce some of the maintenance burden of the nightwatch tests by making it clear when a particular UI has some nightwatch tests that exercise it |
15:20 |
sandbergja |
And one easy way to do that is to just house them in the same directories |
15:20 |
sandbergja |
like we already do with the unit tests |
15:20 |
sleary |
ah yes! I am 100% on board with doing more Nightwatch tests, but (as I am also rearranging our markup on a daily basis) I want to make sure it's easy to find the test that corresponds to the file you're looking at |
15:20 |
sandbergja |
So what do you think? |
15:21 |
Bmagic |
I've been confused by this before, but I figured that there was a reason for it that predates me |
15:21 |
berick |
any concerns about moving them? |
15:21 |
berick |
certainly makes sense |
15:22 |
berick |
having the .html and .ts files side by side has been a huge help |
15:22 |
berick |
just as an example |
15:22 |
sleary |
sandbergja: it's relatively easy to change the command to run all the tests from a directory to a filename pattern, right? |
15:22 |
terranm |
It will also be easier to see if there are things that should have Nightwatch tests that do not |
15:22 |
Bmagic |
I'm all about file folder organization, and it makes sense to me to have related things together. |
15:23 |
shulabear |
+1 to having tests in the same directory as the code being tested |
15:23 |
sandbergja |
sleary: theoretically, yes, it should be just changing the config file. I haven't actually tried it though |
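[Editor's note: one way the change could look from the CLI, using Nightwatch's --filter option; the glob and path here are hypothetical, not the project's actual Nightwatch setup.]

```sh
npx nightwatch eg2/src/app --filter '**/*.nightwatch.js'
```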
15:23 |
sleary |
sandbergja++ |
15:24 |
sandbergja |
Cool! I'm not hearing any objections. Anybody interested in taking this one on? |
15:24 |
Bmagic |
do we vote maybe? |
15:24 |
sandbergja |
ooh, vote sounds fun |
15:25 |
Bmagic |
hah, let's see if I can pull that off
15:26 |
Bmagic |
#startvote Shall we move the tests to related places in the Evergreen codebase yes, or no |
15:26 |
pinesol |
Unable to parse vote topic and options. |
15:26 |
Bmagic |
lol |
15:27 |
Bmagic |
#startvote Shall we move the tests to related places in the Evergreen codebase? yes, no |
15:27 |
pinesol |
Begin voting on: Shall we move the tests to related places in the Evergreen codebase? Valid vote options are yes, no. |
15:27 |
pinesol |
Vote using '#vote OPTION'. Only your last vote counts. |
15:27 |
Bmagic |
#vote yes |
15:27 |
shulabear |
#vote yes |
15:27 |
Dyrcona |
#vote yes |
15:27 |
pinesol |
yes (6): shulabear, sandbergja, mmorgan, Bmagic, Dyrcona, collum |
15:27 |
Bmagic |
voting is still open |
15:27 |
Bmagic |
#endvote |
15:27 |
pinesol |
Voted on "Shall we move the tests to related places in the Evergreen codebase?" Results are |
15:27 |
pinesol |
yes (6): shulabear, sandbergja, mmorgan, Bmagic, Dyrcona, collum |
15:28 |
Bmagic |
that was fun |
15:28 |
berick |
heh, "the tests" to "the places". we're still talking about nightwatch yes? |
15:28 |
terranm |
Sorry, "yes" - heh |
15:28 |
dluch |
lol |
15:28 |
berick |
#vote yes |
15:35 |
sandbergja |
if not, I'm happy to be on hand to answer any questions you have shulabear
15:36 |
shulabear |
Assuming my girlfriend's surgical consult that morning doesn't run long, I should be free |
15:36 |
Bmagic |
#action sandbergja will go over the Nightwatch test reorg with folks at the Monday at 2pm ET meeting or another time as available |
15:36 |
shulabear |
Put it on my calendar |
15:36 |
shulabear |
Sandbergja++ |
15:37 |
Bmagic |
sandbergja++ |
15:39 |
dluch |
mmorgan++ |
15:39 |
Bmagic |
#info Hack-a-way is coming up! |
15:39 |
Bmagic |
#link https://wiki.evergreen-ils.org/doku.php?id=hack-a-way:hack-a-way-2023 |
15:39 |
sandbergja |
it occurred to me that we should probably merge bug 2035287 before moving the nightwatch tests. Make them work first before messing with them hahaha
15:39 |
pinesol |
Launchpad bug 2035287 in Evergreen "e2e tests are failing" [Undecided,New] https://launchpad.net/bugs/2035287 |
15:39 |
Bmagic |
mmorgan++ |
15:41 |
|
jihpringle joined #evergreen |
15:41 |
Bmagic |
thanks sandbergja, that looks like a good one to take care of |