07:43 -!- kworstell-isl joined #evergreen
08:05 -!- BDorsey joined #evergreen
08:06 -!- Rogan joined #evergreen
08:40 -!- mmorgan joined #evergreen
09:01 -!- kworstell_isl joined #evergreen
09:03 -!- Dyrcona joined #evergreen
09:13 -!- kworstell_isl_ joined #evergreen
09:16 -!- smayo joined #evergreen
09:20 -!- mmorgan1 joined #evergreen
09:30 -!- kworstell_isl joined #evergreen
09:56 <Dyrcona> OK. So, parallel definitely speeds things up, because it looks like a/t is still running on the vm where I disabled the parallel settings. It looks like it has finished on the other.
09:56 <Dyrcona> Also, 1 of my cron jobs still does not appear to run. I have it redirecting to a file and that file isn't there.
10:06 -!- rfrasur joined #evergreen
10:08 -!- collum joined #evergreen
10:08 <Dyrcona> At any rate, there was 1 "error" event with the multisession server last night, and so far none with the sequential server.
10:33 -!- gmcharlt joined #evergreen
10:36 -!- briank joined #evergreen
10:49 -!- terranm joined #evergreen
11:10 <Stompro> Do the private and public opensrf domains need to have unique IP addresses?
11:11 <Stompro> I have been using something like this in my /etc/hosts: 192.168.46.31 public-egapp1.larl private-egapp1.larl
11:33 <berick> Stompro: they do not. i do something similar: 127.0.1.2 public.localhost private.localhost
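Putting the two examples above side by side, an /etc/hosts sketch where both OpenSRF domains resolve to one address might look like this (hostnames and addresses are just the illustrative ones from the conversation; any names that resolve consistently on every host will do):

```
# Single host: both the public and private OpenSRF domains on one loopback alias
127.0.1.2       public.localhost  private.localhost

# Multi-host: both domains pointing at the app server's LAN address
192.168.46.31   public-egapp1.larl  private-egapp1.larl
```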
11:35 <Stompro> Thanks, I need to go back and look at JBoyer's slides about inter-brick opensrf communication next.
11:38 -!- jihpringle joined #evergreen
11:41 <Dyrcona> Just confirming that I've used the same address for both domains before. If you have drones running on a different host, you kind of have to.
11:48 <Dyrcona> All right. It looks like 6 autorenew events are "stuck" in "reacting" state on the server that is doing sequential processing, so maybe multisession isn't the issue. Too soon to tell.
11:59 <Stompro> Hmm, JBoyer's example shows using "public.app1.example.com" vs my "public-egapp1.larl". I wonder if I should be using subdomains like that.
12:01 <Dyrcona> Dunno. I've never had drones shared across bricks.
12:02 <Dyrcona> A "brick" being a "head" server running ejabberd and "drone" servers running services talking to ejabberd on the head.
12:03 <JBoyer> DNS resolvability is all that matters, I just prefer typing dots to underscores. :)
12:03 <Dyrcona> You could also try setting up ejabberd s2s connections, but not sure if that works with OpenSRF.
12:03 <JBoyer> Dyrcona, it definitely does, that's how the cross-brick stuff gets around.
12:03 * Dyrcona wasn't sure.
12:04 <JBoyer> Probably more TLS things to turn off, I haven't set it up myself in a bit.
12:07 <Dyrcona> I don't think OpenSRF even cares about the public and private domain names as long as they're set and can be resolved. They could be the same, I think.
12:11 <jeff> I wouldn't recommend making the public and private OpenSRF domain names be the same, unless you're very bored.
12:12 <berick> heh
12:12 <Dyrcona> I wouldn't recommend using public IPs for them.
12:12 <Dyrcona> :)
12:14 <Stompro> Thanks, JBoyer++
12:15 <Stompro> I'm trying out the suggestions in "Hosting Evergreen for Production.pdf" https://drive.google.com/file/d/1UP9wbQadRaPHqHkeItnaAyjEwk4AHaDX/view
12:15 <JBoyer> Stompro++ glad you're exploring. I need to finally write up those proper docs....
12:19 <Dyrcona> Before migrating to MOBIUS and Docker, our setup looked the most like that on page 7. With 6 bricks. The services ran on two drone servers talking to a brick head. The head ran nginx, apache, ejabberd, the routers, settings, and something else that escapes me.
12:19 <Dyrcona> I don't really know what the Docker setup looks like. :)
12:20 <Dyrcona> I just have a notion that it's similar but different. ;)
12:24 <Dyrcona> I suppose I could check out an old branch and look at the opensrf.xml.example.
12:26 * Dyrcona growls at tramp-mode.... It seems to have gotten itself stuck in an error loop, or I don't know how to deal with the current error....
12:28 <Dyrcona> Yeah, tramp-cleanup commands aren't working, either... Guess I'll shut down Emacs...
12:29 <Dyrcona> ...typo'd a hostname..... sheesh.
12:37 -!- collum joined #evergreen
12:37 <Dyrcona> Hm.. I guess '%>' isn't working in a crontab, even if the shell is /bin/bash.
12:37 <Dyrcona> Grr. '&>' NOT '%>'.
12:38 <Dyrcona> It's Friday, right?
12:38 <Dyrcona> Right, so drop the & and add 2>&1 to the end.
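Dyrcona's fix is worth spelling out: `&>` is a bash extension, so a POSIX shell parses `cmd &> file` as `cmd & > file` (the command is backgrounded and the file is truncated empty), while `cmd > file 2>&1` works everywhere. A minimal sketch of the portable form (the log path is illustrative):

```shell
# Portable redirection: send stdout to the file, then duplicate
# stderr onto stdout. Works in any POSIX shell, including the
# /bin/sh that cron uses by default.
sh -c 'echo to-stdout; echo to-stderr >&2' > /tmp/redirect_demo.log 2>&1
cat /tmp/redirect_demo.log
```

In a crontab that becomes something like `0 2 * * * /path/to/job.sh > /path/to/job.log 2>&1`. Note also that an unescaped `%` is special in a crontab command field (cron treats it as a newline), which may explain why a stray `%>` misbehaves there in particular.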
12:41 <Stompro> JBoyer, on page 14 of that doc, you say "We also have to define our non-localhost ejabberd domains and register the appropriate "router" and "opensrf" users for them as usual." Does that mean define them in /etc/hosts? Does /etc/ejabberd/ejabberd.yml need hosts: entries for the other bricks?
12:43 <berick> Stompro: ejabberd.yml does not need hosts: entries for the other bricks.
12:43 <Stompro> berick, thank you.
12:43 <berick> yes to /etc/hosts or dns
12:44 <JBoyer> If you're not using DNS then /etc/hosts has to have all of the ejabberd domains on all machines, but ejabberd.yml only defines the domain that this particular server, uh, serves.
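The split JBoyer describes might look like this for a hypothetical two-brick setup (all hostnames and addresses invented for illustration):

```
# /etc/hosts -- identical on every machine: all ejabberd domains, all bricks
192.168.46.31   public-egapp1.larl  private-egapp1.larl
192.168.46.32   public-egapp2.larl  private-egapp2.larl

# /etc/ejabberd/ejabberd.yml on brick 1 -- only the domains this server serves
hosts:
  - "public-egapp1.larl"
  - "private-egapp1.larl"
```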
12:44 <JBoyer> berick++
12:45 <Dyrcona> I'd set up /etc/hosts on 1 machine, run dnsmasq, and tell the others to use it for dns. Which is pretty much the standard libvirtd/qemu setup.
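Dyrcona's central-DNS approach could be sketched as follows (addresses hypothetical; dnsmasq answers DNS queries from its own /etc/hosts by default, so the hosts file only needs to live on the one machine):

```
# /etc/dnsmasq.conf on the machine holding the master /etc/hosts
listen-address=192.168.46.31
bind-interfaces

# /etc/resolv.conf on every other machine
nameserver 192.168.46.31
```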
12:46 <Stompro> I'm only planning on two or three app servers/bricks... so /etc/hosts seems like it is easy enough to manage for this.
12:47 <Dyrcona> But the cool factor isn't there. :)
12:48 <Dyrcona> I wonder if systemd-resolved can do that? I know it can resolve for the localhost. Will it accept outside connections?
12:49 * Dyrcona decides to save that exercise for later. ;)
12:51 * Dyrcona remembers having "fun" with dnsmasq back in the day at MVLC.
12:55 <Dyrcona> So, the overnight events finally finished at 12:16 pm on the machine running without parallel settings, so that's being undone for the remaining tests.
12:55 <Dyrcona> We're likely going to replace our autorenewal template with something similar to the one Stompro++ shared yesterday. Then drop the courtesy notices.
12:56 <Dyrcona> I'm going to test with the filter to drop circs with autorenewal from the courtesy notices to compare the difference between the two vms. They have the same data, so it should be a fair comparison.
12:57 * Dyrcona poofs out to get some lunch.
13:39 -!- kworstell_isl joined #evergreen
14:36 -!- jihpringle joined #evergreen
15:18 -!- Dyrcona joined #evergreen
15:41 <Dyrcona> A wild storm just went through. I lost power for a bit.
15:44 <JBoyer> berick, is there a reason to prefer the 2010 edi_pusher.pl to your 2016 edi_order_pusher.pl script in some situations? I'm putting in an LP to just install the things automatically and noticed the docs only reference edi_pusher.pl.
15:45 <Dyrcona> JBoyer: The older on is only needed if you still have a/t EDI going on, IIANM.
15:45 <Dyrcona> s/on/one/ <- I been doing that a lot lately.
15:46 <JBoyer> and the ruby stuff, or something *really* old?
15:46 <Dyrcona> Ruby is only needed in that case, too. We're not using any of it at CWMARS.
15:46 <JBoyer> Can't hurt to just install all 3 and let the sysadmin figure it out I suppose.
15:46 <JBoyer> Dyrcona++
15:47 <Dyrcona> Well, I'm not so sure about "it can't hurt." Plus, the webrick gets harder and harder to install with each new distro release.
15:48 * Dyrcona would like to see that all disappear, and I don't think that's the best way to go for new installations.
15:48 <Dyrcona> New sites should be using the new order pusher IMO.
15:51 <JBoyer> I also think the ruby stuff should be removed in time, though sadly I doubt now's the time.
15:52 <JBoyer> I'm thinking that some mention of the difference could go in the docs and the example crontab to prevent anyone from accidentally running both or the wrong one at least.
15:55 <Dyrcona> Yeah. That part has never been well documented.
16:00 <jihpringle> there's a bug for removing the webrick - but there are a few other bugs that are blocking it (that are referenced in that bug) https://bugs.launchpad.net/evergreen/+bug/1990288
16:00 <pinesol> Launchpad bug 1990288 in Evergreen "Remove the webrick" [Undecided,Confirmed]
16:16 <Dyrcona> Well, webrick is technically deprecated already. It's not getting any patches, etc., from the community.
16:37 <jeff> webrick the Ruby module does not appear to be deprecated, and is receiving new releases (as of Jan) and new commits (as of this week). I think the above conversation is using "webrick" and "the webrick" as a shorthand for "the Ruby/webrick-based EDI translator"?
16:38 <Dyrcona> Well, yes, as far as Evergreen is concerned it is technically deprecated.
17:04 -!- mmorgan left #evergreen
18:20 -!- Guest32 joined #evergreen