Topics - Floyd-ATC

31
I have found thousands of articles on the net explaining how to set up a network load balancer using keepalived and haproxy on CentOS 7, in a few easy steps. Unfortunately, and not really surprisingly, most of those articles are incomplete -- basically pieces of a bigger puzzle. Instead of promising a complete solution to all your problems, I will try to explain how I solved the various problems I ran across, hoping that this might be useful to someone else and perhaps inspire someone to point out flaws in my own setup.

Step 1: Install "keepalived" and "haproxy"
Because both are in the standard repo, this is pretty straightforward:
Code: [Select]
yum install -y keepalived haproxy
Step 2: Punch the necessary holes in the firewall
You could open one port at a time, but one article pointed out that it's better to collect everything in a simple XML file so you remember exactly what you opened for haproxy-based services to work. Create a file called "/etc/firewalld/services/haproxy.xml" with the following for starters:
Code: [Select]
<?xml version="1.0" encoding="utf-8"?>
<service>
<short>HAProxy</short>
<description>HAProxy load-balancer</description>
<port protocol="tcp" port="80"/>
</service>
The cool thing about this is that not only can you use this file to punch holes in the firewall, it can also be used to make SELinux understand that haproxy is allowed to listen on those ports:
Code: [Select]
restorecon /etc/firewalld/services/haproxy.xml
Then open the firewall:
Code: [Select]
firewall-cmd --add-service haproxy --permanent
While you're at it, don't forget that keepalived needs to send and receive VRRP messages. If not, both keepalived daemons will incorrectly assume that the partner is dead and they will both try to become MASTER:
Code: [Select]
firewall-cmd --direct --permanent --add-rule ipv4 filter INPUT 0 --destination 224.0.0.18 --protocol vrrp -j ACCEPT
firewall-cmd --direct --permanent --add-rule ipv4 filter OUTPUT 0 --destination 224.0.0.18 --protocol vrrp -j ACCEPT

Step 3: Configure keepalived
This daemon will be used for High Availability. You can use as many load balancers as you like, and they all need a reachable IP address for management, but only one of them will be MASTER at any time and respond to the Virtual IP address where haproxy will listen. Here is an example "/etc/keepalived/keepalived.conf" file:
Code: [Select]
! Configuration File for keepalived

global_defs {
   notification_email {
     you@your_domain
   }
   notification_email_from keepalived@your_host.your_domain
   smtp_server ###.###.###.###
   smtp_connect_timeout 30
   router_id your_host.your_domain
}

vrrp_instance VI_1 {
    state MASTER
    interface ens160
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        ###.###.###.### label ens160:10
    }
}

Notice the "label ens160:10". By default, the Virtual IP address will not actually be visible when using "ifconfig" or other network tools, which can be very confusing. By adding a "label", keepalived will register a virtual network interface for no other purpose than to let "ifconfig" show the Virtual IP address. Just make sure the name doesn't conflict with other interfaces.

If you're concerned that multiple nodes become available at the same time and all try to become master, or perhaps you want a certain node to take preference, adjust the "priority" (higher value wins the election) and consider setting the initial state to BACKUP. As long as VRRP communication works, the nodes should be able to manage this perfectly on their own though.
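For example, a second node could run with the same instance settings but a lower priority and an initial BACKUP state (the values below are illustrative; virtual_router_id and authentication must match the MASTER's):

```
vrrp_instance VI_1 {
    state BACKUP             # start as BACKUP instead of MASTER
    interface ens160
    virtual_router_id 51     # must match the other node
    priority 90              # lower than the MASTER's 100, so it loses the election
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111       # must match the other node
    }
    virtual_ipaddress {
        ###.###.###.### label ens160:10
    }
}
```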

One more thing: we need to adjust the startup parameters for keepalived. Do this by editing "/etc/sysconfig/keepalived":
Code: [Select]
# Options for keepalived. See `keepalived --help' output and keepalived(8) and
# keepalived.conf(5) man pages for a list of all options. Here are the most
# common ones :
#
# --vrrp               -P    Only run with VRRP subsystem.
# --check              -C    Only run with Health-checker subsystem.
# --dont-release-vrrp  -V    Dont remove VRRP VIPs & VROUTEs on daemon stop.
# --dont-release-ipvs  -I    Dont remove IPVS topology on daemon stop.
# --dump-conf          -d    Dump the configuration data.
# --log-detail         -D    Detailed log messages.
# --log-facility       -S    0-7 Set local syslog facility (default=LOG_DAEMON)
#
KEEPALIVED_OPTIONS="-D -P -S 1"

Step 4: Configure haproxy
This daemon is used to perform load balancing, spreading traffic across all of the backend servers.
Most of the articles I found presented a very short config for haproxy, which is pretty cool if you want to make it seem like haproxy takes very little effort to configure. Here's a more realistic "/etc/haproxy/haproxy.cfg" config file to start with:

Code: [Select]
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # Forward log messages to syslog (localhost is okay)
    log         ###.###.###.###:514 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# main frontend which proxies to the backends
#---------------------------------------------------------------------
frontend http *:80
    #acl url_static       path_beg       -i /static /images /javascript /stylesheets
    #acl url_static       path_end       -i .jpg .gif .png .css .js
    #use_backend static          if url_static

    default_backend             web

#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
#backend static
#    balance     roundrobin
#    server      static 127.0.0.1:4331 check

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend web
    #balance     roundrobin
    balance leastconn
    server  web1 ###.###.###.###:80 check
    server  web2 ###.###.###.###:80 check
    server  web3 ###.###.###.###:80 check

Step 5: Make sure haproxy can bind to the Virtual IP address on BACKUP nodes
By default, the Linux kernel does not allow a process to listen on a non-existing IP address. This is a problem for us because that address will only exist on the MASTER node. Fortunately, there is a sysctl setting for this exact situation and CentOS 7 makes it very easy to change those. The only side-effect is that any service configured with an incorrect IP address will no longer generate an error, possibly making the error difficult to spot. Simply create a file "/etc/sysctl.d/haproxy.conf":
Code: [Select]
net.ipv4.ip_nonlocal_bind=1
Step 6: Configure rsyslog
We will create two drop-in files, one for each daemon. Here is an example "/etc/rsyslog.d/keepalived.conf":
Code: [Select]
$ModLoad imudp
$UDPServerRun 514
$template Keepalive,"%timegenerated% your_host %syslogtag% %msg%\n"
local1.* -/var/log/keepalived.log;Keepalive
### keep logs in localhost ##
local1.* ~

And here is an example "/etc/rsyslog.d/haproxy.conf":
Code: [Select]
$ModLoad imudp
$UDPServerRun 514
$template Haproxy,"%timegenerated% your_host %syslogtag% %msg%\n"
local2.=info -/var/log/haproxy.log;Haproxy
local2.notice -/var/log/haproxy-status.log;Haproxy
### keep logs in localhost ##
local2.* ~

Notice that I have included the actual host name in the templates. This is useful if you decide to forward messages to a remote syslog server, where the default hostname "localhost" would be completely useless.

Step 7: Configure logrotate
We've all forgotten this at some point, haven't we?
Here's "/etc/logrotate.d/haproxy":
Code: [Select]
/var/log/haproxy.log /var/log/haproxy-status.log {
    missingok
    notifempty
    sharedscripts
    rotate 7
    daily
    compress
    postrotate
        /bin/kill -HUP `cat /var/run/syslogd.pid 2>/dev/null` 2>/dev/null || true
    endscript
}

And here's "/etc/logrotate.d/keepalived":
Code: [Select]
/var/log/keepalived.log {
    missingok
    notifempty
    sharedscripts
    rotate 7
    daily
    compress
    postrotate
        /bin/kill -HUP `cat /var/run/syslogd.pid 2>/dev/null` 2>/dev/null || true
    endscript
}

Step 8: Restart all relevant services
You should now be able to restart everything, either one service at a time or simply by rebooting.
Code: [Select]
sysctl --system
firewall-cmd --reload
systemctl restart rsyslog
systemctl start keepalived
systemctl start haproxy

Really? That's it?
Now, one last problem caused some head-scratching, and it wasn't actually a problem with the Linux setup. I'm using a Juniper SRX router/firewall which by default ignores Gratuitous ARP because while it does have legitimate uses (like, say, a VRRP host announcing that it has taken over a Virtual IP address) it can also be used for malicious things like man-in-the-middle attacks. Assuming your servers are on a network that you control, this shouldn't be an issue. (Or rather, if your servers are on a network where this might be an issue, you might want to ignore load balancing for now and solve that problem first) But I digress. The end result of having a router ignore GARP is that the router will merrily keep forwarding packets to your (presumably dead) load balancer and not the new MASTER. The solution for Juniper SRX is quite simple, just set "gratuitous-arp-reply" on the interface(s) facing your server network(s). Check your router docs if you see the same symptom.
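On the SRX side, the change can be sketched like this in configuration mode (the interface name is only an example; use the one(s) facing your server network):

```
set interfaces ge-0/0/1 gratuitous-arp-reply
commit
```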

Anything else?
Is anyone able to write an article that doesn't miss a single thing? I don't think I ever read one, and I know for a fact I never wrote one. Check your log files, test the service and start troubleshooting. Then test again. Reboot. Test more. Feel free to comment below :-)

32
ATC-Pjatt / Advertising???
« on: 24 August 2015, 17:22 pm »
I have upgraded the forum to SMF2 to get reCAPTCHA in place and fight spam. If it keeps working as well as it appears to, we might be able to drop the "approval" step for new users so that it's easier for people to register.

Another thing I'm experimenting with is ad banners, as you have probably noticed. For now they are empty, but that will hopefully change within a few days. The traffic on this forum is hardly enough to finance more than a pack of chewing gum a year, but every little bit helps.

The important thing is that you do NOT try to "help" by generating lots of clicks or other silly things!

Use the forum exactly as you normally would, or we will quite simply get blacklisted. For those of you running AdBlock or similar: Know that I run AdBlock myself with a perfectly clear conscience. The choice is yours. If you feel like contributing, post useful things in the forum. If you have AdBlock, writer's block and too much money, there is a donation feature at the very bottom of the page :-D


33
General technical / Android Studio?
« on: 22 August 2015, 18:23 pm »
Has anyone tried this? I downloaded the latest version (23), installed it, created a new project and presto -- three error messages that make it refuse to compile. A quick search shows this is a known problem with this version; the official fix is a hack that makes it keep using version 22. Great for everyone except those who just downloaded and installed it for the first time.  ???

Update: Today they released a patch which took about half an hour to install. Things then churned and compiled for about fifteen minutes before I got even more error messages. It's been a long time since I've seen such shoddy work.

34
Minecraft / ATC.NO down for five weeks - finally back up
« on: 02 August 2015, 13:44 pm »
Next time I leave for five weeks of vacation, I will probably take a few minutes to explain to my overzealous partner that it is NOT necessary to yank out the surge protector on the network line in case of a thunderstorm, seeing as that is exactly what the surge protector is there for.

Sorry, it never even occurred to me that anyone could be that dense, or I would have had someone come by and plug the thing back in before five long weeks had passed.

35
Service announcements / ATC.NO down for five weeks - finally back up
« on: 02 August 2015, 13:43 pm »
Next time I leave for five weeks of vacation, I will probably take a few minutes to explain to my overzealous partner that it is NOT necessary to yank out the surge protector on the network line in case of a thunderstorm, seeing as that is exactly what the surge protector is there for.

Sorry, it never even occurred to me that anyone could be that dense, or I would have had someone come by and plug the thing back in before five long weeks had passed.




36
Minecraft / mc.atc.no - Planned downtime Saturday
« on: 29 May 2015, 21:53 pm »
Hi,

The server will be unavailable for most of Saturday 30 May, from morning until late in the evening, due to planned work at the server's location.

37
We encountered a scenario where more than 500000 small files needed to be copied from a CIFS share running on a NetApp to a local USB disk. To make matters even more interesting, the NetApp uses on-access virus scanning. I should point out that we wanted to discard file times, ownership and permissions if possible. The USB disk was therefore formatted as exFAT rather than NTFS.

This was solved using a laptop computer with a USB3 port and a Gigabit Ethernet port.

First, a plain old Ctrl+C/Ctrl+V file copy achieved a peak throughput of about 3 Mbytes/sec, which would complete the copying of 530 Gbytes in just over 2 days. TeraCopy yielded the exact same throughput. This was completely unacceptable.

Second, we downloaded RichCopy, a tool originally developed by Microsoft for internal use but later released to the public. Using 50 parallel threads and request serialization we achieved a peak throughput of nearly 8 Mbytes/sec, reducing the estimated time needed to under 23 hours.

Third, we booted a Knoppix live DVD, mounted the exFAT USB drive and CIFS share and used the following commands:
Code: [Select]
time find Source -noleaf -type d | xargs -n 1 -P 10 -I % mkdir -p /mnt/destination/%
Code: [Select]
time find Source -noleaf -type f | xargs -n 1 -P 50 -I % cp % /mnt/destination/%
Here, "Source" is the relative name of the directory containing the files to be copied.
"/mnt/destination" is the USB drive mount point.
The first command builds the directory structure using up to 10 parallel processes. This took about 10 minutes.
The second command copies the actual files using up to 50 parallel processes. This took just over 7 hours with an average throughput of just over 20 Mbytes/sec.
Note the "-I %", which means that the character "%" should be substituted with the actual directory or file name to be created/copied.

It is possible to have 'cp' keep file times using the appropriate options; note however that this may affect performance. You should also experiment with the number of parallel processes to find the "sweet spot" for your particular scenario. Not all USB drives, computers, network switches and CIFS servers are the same. That said, the most important factor is probably the number and size of the files to be copied.
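Here is a minimal, self-contained sketch of the same two-pass approach, with -p added to preserve file times; the directory names "Source" and "dest" and the scratch files are made up for the demo:

```shell
# Demo in a throwaway directory; the names "Source" and "dest" are illustrative.
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p Source/a Source/b
echo one > Source/a/f1.txt
echo two > Source/b/f2.txt

# Pass 1: recreate the directory tree first (up to 4 parallel mkdirs)
find Source -type d | xargs -n 1 -P 4 -I % mkdir -p dest/%

# Pass 2: copy the files; -p preserves times and modes (up to 4 parallel cps)
find Source -type f | xargs -n 1 -P 4 -I % cp -p % dest/%

ls -R dest
```

The two-pass split matters: directories must exist before the parallel 'cp' processes race to fill them.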

I should also mention that the GUI on our Knoppix 7.4.2 crashed halfway through one of the tests, possibly due to a resource leak in the window manager or display driver. This did not seem to affect the performance but we decided to boot using the CLI only by typing "knoppix 2" at the bootloader prompt. As a bonus, this freed up some system resources, improving the performance even more.  :)

38
One example of code that ran perfectly well on JRE7 when compiled against JRE7:

Code: [Select]
ConcurrentHashMap<Integer, String> hash = /* contents here... */
for (Integer key : hash.keySet()) {
    /* Do something with the key here */
}

However, when compiled with JRE8 this code would only run on JRE8; on JRE7 it would crash with
Code: [Select]
java.lang.NoSuchMethodError: java.util.concurrent.ConcurrentHashMap.keySet()Ljava/util/concurrent/ConcurrentHashMap$KeySetView;
Because Java, that's why.

There is no way around this problem except to compile against JRE7. Here's how to do that with Eclipse:

  • Download JRE7 from java.com and install it. (Don't worry, this will not affect your newer JREs.)
  • In Eclipse, click "Project", "Properties", "Java Compiler".
  • Click "Configure Workspace Settings..." if you want to switch ALL projects to JRE7, otherwise check "Enable project specific settings".
  • Select compliance level 1.7
  • Near the bottom of the dialogue, click "Configure installed JREs".
  • Click the "Add" button.
  • Select "Standard VM" and click "Next".
  • Next to "JRE home", click "Directory".
  • Browse to "C:\Program Files\Java\jre7" and click "OK".
  • Click "Finish".
  • The list of installed JREs should now include "jre7". Click the checkbox next to it, then click "OK".

Eclipse should now compile your Java code to be compatible with JRE7 and complain if you are trying to use features that only exist in JRE8 (or newer).
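If you build outside Eclipse, the command-line equivalent uses the same idea; the rt.jar path below is an assumption based on the install directory mentioned above, and MyClass.java is a placeholder:

```shell
# -source/-target pin the language and bytecode level to 1.7, while
# -bootclasspath makes javac resolve classes against the JRE7 runtime
# library instead of the newer one bundled with the compiler.
javac -source 1.7 -target 1.7 \
      -bootclasspath "C:/Program Files/Java/jre7/lib/rt.jar" \
      MyClass.java
```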

39
Minecraft / The server will be down for a few hours Friday evening
« on: 06 March 2015, 16:19 pm »
Hi,

We will be performing maintenance affecting MC.ATC.NO among other things; the work is scheduled to finish around 21:00.

40
Minecraft / SPIGOT version 1.8 installed!
« on: 10 January 2015, 19:13 pm »
Spigot is an alternative to Bukkit that is almost 100% compatible. Only a few plugins have complained, and those have now been replaced, so the server should by and large work as before. What remains is thorough testing, so I need to hear about anything that needs fixing.

Found a bug? Report it in this thread, or send me an SMS if it's a real emergency.

OBTW! For your information, use Minecraft version 1.8.1

41
News Software / Hardware / New servers. Again.
« on: 05 December 2014, 07:06 am »
Please visit the forums for full details; our old CentOS 5 servers "Hercules", "Hermes" and "Perseus" have all been replaced by a single new CentOS 7 server called "Zeus". Since we are no longer constrained in terms of RAM usage and we're seeing far less server load than before, the careful separation of critical vs. non-critical services is no longer merited. This simplified architecture should result in an overall increase in performance.

42
Service announcements / New mail/web/dns server, new VMware platform
« on: 05 December 2014, 07:02 am »
Our new CentOS 7 server "Zeus" has replaced the old CentOS 5 based servers "Hercules", "Hermes" and "Perseus" from early 2011.

This has been coming for some time, after we got a generous contribution in the form of a Dell R620 server with plenty of RAM to play around with.

I'm still working on a few rough edges, most notably the spam filter needs some serious attention after being pretty much neglected over the past couple of years. Please let me know if you notice anything that's broken.

Also note that I have disabled QUITE a few accounts and email addresses that showed no evidence of recent use. If I have overlooked something, just send me an SMS and I'll re-open whatever needs to be re-opened...

43
SpamAssassin logs the following message in /var/log/messages

Code: [Select]
Issuing rollback() due to DESTROY without explicit disconnect() of DBD::mysql::db handle bayes:localhost at /usr/share/perl5/vendor_perl/Mail/SpamAssassin/Plugin/Bayes.pm line 1516
This is because SpamAssassin for some reason explicitly sets AutoCommit to 0 and doesn't properly disconnect() its database handle before it goes out of scope. This problem was first reported in 2010(!) and instead of resulting in a patch, the discussion degenerated into bashing the Perl language.

Note that if you are using MyISAM tables in MySQL, rollbacks are not actually possible. If, however, you are using a table format which supports transactions (such as "InnoDB"), then this bug is effectively preventing your Bayes database from learning.

The quick fix is to enable AutoCommit; hopefully the SpamAssassin team will come up with a proper fix before the sun collapses.

Edit the file "/usr/share/perl5/vendor_perl/Mail/SpamAssassin/BayesStore/MySQL.pm"
Change
Code: [Select]
'AutoCommit' => 0
to
Code: [Select]
'AutoCommit' => 1
Save the file and restart the spamassassin service.
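The edit can also be scripted. Here is a sketch against a scratch stand-in file so you can see the substitution work; to apply it for real, point the path at the actual MySQL.pm instead:

```shell
# Scratch stand-in for MySQL.pm; the real file is
# /usr/share/perl5/vendor_perl/Mail/SpamAssassin/BayesStore/MySQL.pm
f=$(mktemp)
echo "my \$dbh = DBI->connect(\$dsn, \$dbuser, \$dbpass, {'AutoCommit' => 0});" > "$f"

# Flip AutoCommit from 0 to 1 in place
sed -i "s/'AutoCommit' => 0/'AutoCommit' => 1/" "$f"

grep AutoCommit "$f"
```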


44
General technical / Recompiling Apache 2.4.10 suexec for CentOS 7
« on: 01 December 2014, 13:12 pm »
When migrating an old Apache server of mine to CentOS 7, it broke horribly because my old web server relied on a slightly modified suexec binary which was compiled to use a different document root. This is because all of my web sites live under "/home" and not under "/var/www".

It took me a few hours to figure out, mostly because there seem to be very few good articles on the subject. Anyway, I finally got it working in a sort of hackish way.

Before we go any further, I have to assume you know the very basics of compiling software. You'll need to have installed some basic stuff like gcc, kernel headers and such or you won't get very far. This article focuses on the specifics of getting the suexec binary built. Nothing more. Seriously, you won't even get a complete Apache web server by following this advice. Just a working suexec.

Still with me? Good :-)

0. Install RPM dev tools. This should set you up with a nice set of directories under ~/rpmbuild
Code: [Select]
$ yum install rpmdevtools
$ cd ~/rpmbuild/SOURCES

1. Download the newest Apache source (you need the .bz2, not the .tgz)
Code: [Select]
$ wget http://apache.uib.no/httpd/httpd-2.4.10.tar.bz2

2. Try to build the RPM to get a list of dependencies (it will most likely fail)
Code: [Select]
$ rpmbuild -tb httpd-2.4.10.tar.bz2
error: Failed build dependencies:
       libselinux-devel is needed by httpd-2.4.10-1.x86_64
       libuuid-devel is needed by httpd-2.4.10-1.x86_64
       apr-devel >= 1.4.0 is needed by httpd-2.4.10-1.x86_64
       apr-util-devel >= 1.4.0 is needed by httpd-2.4.10-1.x86_64
       pcre-devel >= 5.0 is needed by httpd-2.4.10-1.x86_64
       openldap-devel is needed by httpd-2.4.10-1.x86_64
       lua-devel is needed by httpd-2.4.10-1.x86_64
       libxml2-devel is needed by httpd-2.4.10-1.x86_64
       distcache-devel is needed by httpd-2.4.10-1.x86_64
       openssl-devel is needed by httpd-2.4.10-1.x86_64

3. We don't want to have to recompile those, so let's just install the RPMs
Code: [Select]
$ yum install libselinux-devel libuuid-devel apr-devel apr-util-devel pcre-devel openldap-devel lua-devel libxml2-devel distcache-devel openssl-devel  

4. Notice that 'distcache' still doesn't work
Code: [Select]
$ rpmbuild -tb httpd-2.4.10.tar.bz2
error: Failed build dependencies:
       distcache-devel is needed by httpd-2.4.10-1.x86_64

5. Get the Fedora source RPM for that, build it and install it
Code: [Select]
$ wget http://www.gtlib.gatech.edu/pub/fedora.redhat/linux/releases/18/Fedora/source/SRPMS/d/distcache-1.4.5-23.src.rpm
$ rpmbuild --rebuild distcache-1.4.5-23.src.rpm
$ rpm -ivh ~/rpmbuild/RPMS/x86_64/distcache-1.4.5-23.x86_64.rpm ~/rpmbuild/RPMS/x86_64/distcache-devel-1.4.5-23.x86_64.rpm

6. Now try again to build the vanilla Apache server
Code: [Select]
$ rpmbuild -tb httpd-2.4.10.tar.bz2
For me, it still failed with a "file not found", but only AFTER it had successfully compiled "suexec", so it doesn't matter:
Code: [Select]
$ ~/rpmbuild/BUILDROOT/httpd-2.4.10-1.x86_64/usr/sbin/suexec -V
-D AP_DOC_ROOT="/var/www"
-D AP_GID_MIN=100
-D AP_HTTPD_USER="apache"
-D AP_LOG_EXEC="/var/log/httpd/suexec.log"
-D AP_SAFE_PATH="/usr/local/bin:/usr/bin:/bin"
-D AP_UID_MIN=500
-D AP_USERDIR_SUFFIX="public_html"

7. Delete the "suexec" binary that we just compiled. Trust me.
Code: [Select]
rm ~/rpmbuild/BUILDROOT/httpd-2.4.10-1.x86_64/usr/sbin/suexec

8. Now edit the file "~/rpmbuild/SPECS/httpd.spec" which controls exactly how the source gets compiled.
We need to make two changes:
   1) At the very beginning of the file, add the following line:
Code: [Select]
%define homedir /home
   2) Locate the following line:
Code: [Select]
--with-suexec-docroot=%{contentdir} \
   replace it with
Code: [Select]
--with-suexec-docroot=%{homedir} \

9. Now build the (slightly) modified Apache server (notice the different rpmbuild command)
Code: [Select]
$ rpmbuild -bb ~/rpmbuild/SPECS/httpd.spec

10. It failed the last time around so expect it to fail this time too. We're still only interested in the "suexec" binary though, so let's check it out:
Code: [Select]
$ ~/rpmbuild/BUILDROOT/httpd-2.4.10-1.x86_64/usr/sbin/suexec -V
-D AP_DOC_ROOT="/home"
-D AP_GID_MIN=100
-D AP_HTTPD_USER="apache"
-D AP_LOG_EXEC="/var/log/httpd/suexec.log"
-D AP_SAFE_PATH="/usr/local/bin:/usr/bin:/bin"
-D AP_UID_MIN=500
-D AP_USERDIR_SUFFIX="public_html"
   
Notice anything else you'd like to change while you're at it? Go back and edit the .spec file, now that you know how. I wouldn't go changing anything in the actual source code though, because unless you're an expert and know EXACTLY what you're doing, you will most likely introduce a security vulnerability. Seriously, don't.

11. Copy the fresh binary into your web server and you should be all set to restart Apache:
Code: [Select]
$ cp ~/rpmbuild/BUILDROOT/httpd-2.4.10-1.x86_64/usr/sbin/suexec /usr/sbin/suexec
cp: overwrite ‘/usr/sbin/suexec’? y
$ systemctl restart httpd

12. Now, there's a couple of more things you should consider.
Because the suexec that ships with CentOS is "improved" to use syslog instead of a plain old log file, while the version we just compiled uses a plain old log file, you need to create that log file and make it writable for the 'apache' user, or these error messages will appear in your error_log file:
Code: [Select]
[Mon Dec 01 12:38:50.919555 2014] [cgi:error] [pid 17535] [client 10.1.1.53:55449] AH01215: suexec failure: could not open log file, [...]
[Mon Dec 01 12:38:50.919615 2014] [cgi:error] [pid 17535] [client 10.1.1.53:55449] AH01215: fopen: Permission denied, [...]

Code: [Select]
$ chmod 755 /var/log/httpd
$ touch /var/log/httpd/suexec.log
$ chown apache:apache /var/log/httpd/suexec.log
$ chmod u+s /usr/sbin/suexec

You will also want to edit the file "/etc/logrotate.d/httpd" and insert the keyword "create":
Code: [Select]
/var/log/httpd/*log {
    missingok
    notifempty
    sharedscripts
    delaycompress
    create
    postrotate
        /bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true
    endscript
}
This will ensure that when logrotate periodically removes the log file, a new (empty) log file is created in its place with the correct permissions/ownership.

If this worked for you, please consider letting me know. If it didn't, then please try to figure out why and then let me know :-)

45
Minecraft / The ADSL line is down. Again.
« on: 06 August 2014, 19:49 pm »
As some of you may have noticed, both the forum and the server have been unavailable since Monday afternoon. The reason is that lightning struck and killed my ADSL modem for the Nth time. I have now set up a fallback solution based on 3G; it is terribly slow, but better than nothing.
