AWS OpsWorks review, one year on

For the past year I’ve been using OpsWorks on AWS. I’m not sure who this product is aimed at, but clearly it’s people who are experts in Chef automation and/or have deep knowledge of AWS gleaned from querying support, information not available in the technical docs. When I initially set up OpsWorks it had recently been moved onto a newer version of Chef (12), there’s very little documentation around, and their example GitHub repo and cookbooks are pretty old: https://github.com/aws/opsworks-cookbooks

I could be wrong, but I don’t think those worked either. The major caveat is that the online interface doesn’t seem to pick up on any changes you make outside of OpsWorks. That might be acceptable, except that even changing things inside the OpsWorks GUI doesn’t always work. Try changing the name of a machine (this is used as the hostname): OpsWorks picks this up from the machine itself, so you then change the machine’s hostname, but it still reverts back to what’s set in OpsWorks. Other anomalies include it showing the wrong IP address when an Elastic IP is assigned.

Here’s a snippet from a support query:

When we use managed services, updates, deletes, and other modifications to the infrastructure need to be done within the management console for that service. When we make changes to resources created by something like OpsWorks outside of the OpsWorks console or AWS cli, weird behavior can occur. This is because agents controlling those resources cannot track the changes made to those resources if they are made outside of their jurisdiction. These changes are called “out of band changes.” The above scenario would be considered an out of band change, and OpsWorks is reacting to changes to its resources in an expected manner for this situation. Right now, because of the series of events surrounding the use of that EIP, OpsWorks views that EIP as unassigned to any instance in the stack, hence the conflicting information about the public IP for that instance.

Another feature of OpsWorks is auto-healing: server failure caused by a hardware fault can be handled automatically by OpsWorks bringing the instance back up. After a colleague had a long chat with AWS support, it turns out this isn’t supported unless you configure your instances in a specific way. Even when it does work, the instance can get stuck in a stop_failed status, which effectively means it didn’t work.

The only usable aspect of OpsWorks, and the only reason we are still using it, is the integrated user management with AWS IAM policies, which is great for SSH key management. Past that, we haven’t found it reliable enough, or easy enough to use, for any of the other claimed features.


Using logrotate with your Laravel projects

Hard disk filled up by logs? It’s often overlooked until it becomes an issue, and it can become a cycle of death for a server: application errors occur > write to log file > disk space full.

There’s a simple way to overcome that and prevent the logs growing to an unmanageable size.

Laravel 4 and 5 do support some log configuration, for example setting it to rotate daily. But this isn’t a concrete prevention of the logs getting out of hand; for that you need the logrotate utility.
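
In Laravel 5, for example, switching to daily logs is a one-line change (a sketch; in Laravel 4 the equivalent is calling Log::useDailyFiles() in app/start/global.php):

// config/app.php (Laravel 5)
'log' => 'daily',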

Note that this guide isn’t Laravel specific; you could follow it for any log file you’re keeping.

logrotate configuration files are stored in /etc/logrotate.d (on Ubuntu, though likely the same on most Linux environments).

Add your file like so:

nano /etc/logrotate.d/laravel

and its contents:

/var/www/project/app/storage/logs/laravel.log {
    rotate 14
    daily
    compress
    maxage 14
}

Options explained

rotate – How many rotated files are kept before deleting the oldest
daily – How often to rotate the files (this could be hourly*, daily, weekly, monthly)
compress – Yep, you will want this, as text files compress well
maxage – Files older than 14 days will be deleted

*hourly needs the logrotate cron job updating to run hourly; best practice would be changing it to run every 5 minutes, so all your configuration runs whenever it’s required.

*/5 * * * * /etc/cron.daily/logrotate

So in the above example we keep a maximum of 14 log files, compressed, rotated daily and kept for 14 days. In our log directory we should have the following files:

laravel.log
laravel.log.1.gz
laravel.log.2.gz
laravel.log.3.gz
laravel.log.4.gz
laravel.log.5.gz
etc..

There are also some neat tricks you can do with logrotate, like matching all log files with a wildcard, if for example we had more than one log file in the same directory.

/var/www/project/app/storage/logs/*.log {
    rotate 14
    daily
    compress
    maxage 14
}

Or you could specify multiple files:

/var/www/project/app/storage/logs/laravel.log /var/www/project/app/storage/logs/debug.log {
    rotate 14
    daily
    compress
    maxage 14
}

You can also run other tasks when the log is rotated:

/var/www/project/app/storage/logs/*.log {
    rotate 14
    daily
    compress
    maxage 14
    postrotate
        /usr/sbin/apachectl restart > /dev/null
    endscript
}

Furthermore, you can restrict the size of the files by using minsize, size and maxsize, specifying a unit in k, M or G, so a production example might look like the following:

/var/www/project/app/storage/logs/*.log {
    rotate 7
    daily
    compress
    maxage 7
    maxsize 10M
}

We now know that the logs will take up a maximum of around 70MB of disk space. It’s important to note that the size options work irrespective of the rotation frequency: even though the log above is set to rotate daily, if the file grew to 10MB it would be rotated immediately.

That also depends on having the cron job set up at a higher frequency, as mentioned above.
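
You can sanity-check a new configuration without waiting for cron; logrotate’s debug flag does a dry run, and -f forces a rotation:

logrotate -d /etc/logrotate.d/laravel   # dry run, prints what would happen
logrotate -f /etc/logrotate.d/laravel   # force a rotation now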


Removing a large number of files (argument list too long)

Three useful commands for when you have a large number of files to delete, whether you’ve hit the inode limit or just want them gone.

View inode usage:

df -i

How many files?

ls -U1 | wc -l

Remove a folder containing these files:

rm -fr folder/* (or with a pattern, rm -fr folder/sess_*)

If there are so many files that the shell glob fails with “argument list too long”, a quicker way is to use find; the example below removes all matching files over 80 days old:

find /really/large/folder -name 'file_pattern_*' -type f -mtime +80 -exec rm {} \;
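
As a side note, GNU find also has a built-in -delete flag, which avoids spawning rm for every file:

find /really/large/folder -name 'file_pattern_*' -type f -mtime +80 -delete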

Or, even quicker, use rsync to empty the directory.

rsync -a --delete empty_dir/ yourdirectory/

Enhancing Magento Security – Best practices

After watching the recent Panorama documentary on the TalkTalk hack, I wondered how vulnerable many Magento sites are and what can be done to tighten Magento’s security.

Many open-source platforms are hacked regularly; WordPress is a regular culprit, for example. It’s not so much that the core code has vulnerabilities; it’s usually a rogue plugin, or one that hasn’t been updated. With WordPress’s built-in automatic updating it has a bit more protection.

Magento doesn’t have such frequent, easy updating, but recently the Magento team have released a flurry of patches, which suggests either that they have woken up to the fact that Magento in its previous incarnation had many exploits, or that retailers have fed back to Magento. Whatever the case, like any open-source platform, continual updates are required, especially in an ecommerce environment.

Magento and other ecommerce systems will always be a prime target for hackers: thousands of unencrypted customer details, and in most cases it’s impractical to encrypt them, so you’re left with the only option of tightening up server security and the application itself.

So what steps can you take to secure Magento?

Patch/Upgrade

Install the most up to date patches
http://magento.com/security-patch

As of writing there are around 15 patches for the community edition, depending on which version you have installed.

https://www.magentocommerce.com/download

Alternatively, upgrade Magento to the most current release (1.9.2.2 at the time of writing), as it includes all current patches.

Upgrade to the next version 2.0
Probably not viable for most retailers just yet

PCI Compliance

Any ecommerce site should be PCI compliant. Even if you’re taking payments offsite, it’s worth having a PCI scanner in place; for that I would recommend Trustwave, which some PayPal integrations require anyway.

This will scan your site for common exploits, you might be surprised by what it picks up.

Trustwave

https://www.trustwave.com

Strong SSL

Ensure you’re using a strong cipher suite and your web server is set up to use the correct protocols.

SSL Rating

https://www.ssllabs.com/ssltest/

Run your site through SSL Labs for any recommendations, A or above rating is recommended.
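
As a starting point, nginx directives along these lines disable the older, broken protocols (a sketch only; check SSL Labs’ current recommendations, as cipher advice dates quickly):

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;   # no SSLv2/SSLv3
ssl_prefer_server_ciphers on;
ssl_ciphers HIGH:!aNULL:!MD5:!RC4;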

Change the admin path

A simple yet effective change is to move /admin somewhere else: edit your app/etc/local.xml to include the following, replacing the frontName value with your new path.

<admin>
    <routers>
        <adminhtml>
            <args>
                <frontName><![CDATA[your-new-admin-path]]></frontName>
            </args>
        </adminhtml>
    </routers>
</admin>

If this isn’t an option, as it can break some plugins, then ask your server admin to add a fail2ban rule (installing fail2ban first if necessary) to prevent attacks on the /admin folder.

Prevent access

To Magento’s directories; see a complete list in the bottom example.

Directly in the vhost (Apache):

<LocationMatch "^/(app/|var/)">
    Require all denied
</LocationMatch>

Nginx:

location ^~ /app/ { deny all; }
location ^~ /var/ { deny all; }

Alternatively you could return a 404:

location /app/                { return 404; }
location /downloader/         { return 404; }
location /errors/             { return 404; }
location /media/              { return 404; }
location /assets/             { return 404; }
location /images/             { return 404; }
location /skin/               { return 404; }
location /includes/           { return 404; }
location /lib/                { return 404; }
location /media/downloadable/ { return 404; }
location /pkginfo/            { return 404; }
location /report/config.xml   { return 404; }
location /shell/              { return 404; }
location /var/                { return 404; }

Also make sure to include any external import/export tools, e.g. Magmi.

Turn on SSL for the admin

Ideally you should have your entire site served over SSL, though this can affect performance on lower-end servers.
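
In Magento 1 the relevant settings live under web/secure in core_config_data; a minimal sketch, assuming direct database access, a hypothetical database name and no table prefix (the admin equivalent is System > Configuration > General > Web > Secure):

# back up first, and flush the config cache afterwards
mysql your_magento_db -e "UPDATE core_config_data SET value = '1' WHERE path = 'web/secure/use_in_adminhtml';"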

Use strong passwords

Simple but obvious: weak passwords are probably the single biggest threat. Make it easy to remember but hard to guess; a password generator might not be useful here unless you use a password manager. Ideally, use two-factor authentication.

Two factor auth

Through this extension: http://www.xtento.com/magento-extensions/two-factor-authentication-enhanced-admin-security.html

Restrict admin by IP

Ideally this restriction would sit in the Apache/nginx config, but if you don’t have that kind of access, or want GUI control, there is a free module.

http://www.magentocommerce.com/magento-connect/et-ip-security.html
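
If you do have server access, a minimal nginx sketch (assuming the admin path is still /admin and 203.0.113.5 is your office IP) would be:

location /admin {
    allow 203.0.113.5;   # your office/static IP
    deny all;
}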

Use a more granular admin permission module

This allows you to control in a more detailed way what each admin user can do.

http://www.aitoc.com/en/magentomods_advanced_permissions.html

Even if it’s just you managing your site, you could have a master login and an editor role, to limit the use of full access.

Advanced server side

More advanced server side implementations should include:

Install fail2ban

As mentioned above, you can set fail2ban up to restrict login attempts on /admin, but more globally it protects against attacks on SSH and other services.
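
A sketch of /etc/fail2ban/jail.local: the ssh jail ships with fail2ban, while the magento-admin filter is a hypothetical one you would write yourself to match failed /admin logins in your access log:

[ssh]
enabled = true

[magento-admin]
enabled  = true
port     = http,https
filter   = magento-admin          # hypothetical custom filter
logpath  = /var/log/nginx/access.log
maxretry = 5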

Use of hard/soft firewall

Any server should have a software firewall in place locking down ports that should not be open; ideally, a hardware firewall gives more concrete protection.
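
On Ubuntu, a minimal ufw sketch for a typical web server (adjust the SSH rule if you’ve changed the port) looks like this:

ufw default deny incoming
ufw allow 22/tcp    # SSH; change if you've moved the port
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable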

Correct permissions on folders

Configure which folders have read/write access; ideally all should be read-only except where required.
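
For Magento 1, a common baseline is read-only files and directories with only var/ and media/ writable (a sketch, assuming the web server runs as www-data and your code lives in /var/www/project):

cd /var/www/project
chown -R youruser:www-data .
find . -type f -exec chmod 644 {} \;
find . -type d -exec chmod 755 {} \;
chmod -R g+w var media   # only the directories Magento writes to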

Anti virus installation

ClamAV or any other antivirus install; a personal favourite on Windows is ESET. It’s worth noting you should have this on your personal/work machines too.
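
On Ubuntu, ClamAV takes a couple of minutes to set up and can scan the web root on a schedule:

apt-get install clamav
freshclam                 # update virus definitions
clamscan -ri /var/www     # recursive scan, report infected files only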

Code monitors

Use products like CodeGuard, which alert you to code changes; there’s also Sucuri. This helps spot simple iframe injections or other code-injection attacks, where the site is kept running with an unobtrusive line of malware injected into your index.php or other index files.

DNS level protection

Use CloudFlare for DNS level blocking and DDoS protection.

Use a load balancer

Whilst not essential, this can help absorb DDoS attacks and hide your main server IP; it also gives you lots of flexibility in changing/upgrading your server.

Change SSH port

This does cut out a lot of attacks. There are arguments for and against, since a hacker could easily find the new port, but it will prevent the thousands of automated attacks.
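
The change is one line in /etc/ssh/sshd_config; keep an existing session open while you restart sshd, in case you lock yourself out:

# /etc/ssh/sshd_config
Port 2222          # anything non-standard
# then: service ssh restart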

Disable non secure FTP

It’s not enabled by default on most Linux distros, but if you have something like cPanel installed it may be.

Jail SFTP users & Jail Apache/Nginx

Effective in locking down what users can access, and in preventing further access to other parts of the system.
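
A sketch of an sshd_config chroot for an sftponly group (note that OpenSSH requires the chroot directory itself to be root-owned and not group-writable):

Match Group sftponly
    ChrootDirectory /var/www/project
    ForceCommand internal-sftp
    AllowTcpForwarding no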

Remove .htaccess / disable AllowOverride and put the configuration directly in the vhost file. (Apache Only)

Moving the config further up the chain makes it harder for hackers to change the site configuration; in combination with the jail, it would be difficult for someone to make a change to the Apache configuration for the site.

Disable Postfix and other mail services and use Mailgun/Mandrill instead

Recently a server I look after was compromised and was sending out spam emails via Postfix. It’s best to disable these services unless you understand properly how to secure them; Mailgun and Mandrill both have limits in place that would prevent this kind of attack.
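
On Ubuntu, stopping and disabling the service is enough (assuming nothing on the box legitimately needs local mail):

service postfix stop
update-rc.d postfix disable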


Developer Shortcut – Bash script menu to connect to servers via SSH

Managing various servers can get quite repetitive, entering various IP addresses or even remembering them in the first place! Let’s solve that with a simple bash script. Depending on whether you’re doing this from a server or your local environment, the location of your bash_profile may be different, but the steps are the same.

Your profile is usually in /home/yourname/.bash_profile

nano /home/yourname/.bash_profile

Open up this file and add a new line at the bottom:

alias connect='bash /Users/yourname/menu.sh'

What this does is say: when I type connect in the terminal, run menu.sh. You could change this word to another of your choice.

Change the location of menu.sh to suit your needs, save and close.

Make a new file called /Users/yourname/menu.sh and inside put:

#!/bin/bash
# Bash SSH Menu
# youruser is a placeholder; use your own SSH username

PS3='Please enter your choice: '
options=("mysql" "web" "Quit")
select opt in "${options[@]}"
do
    case $opt in
        "web")
            echo "Connecting to web 10.0.0.1"
            ssh youruser@10.0.0.1
            break
            ;;
        "mysql")
            echo "Connecting to mysql 10.0.0.2"
            ssh youruser@10.0.0.2
            break
            ;;
        "Quit")
            break
            ;;
        *) echo "invalid option";;
    esac
done

Here we set up the menu options in the options line, in this case mysql and web, which refer to my MySQL and web servers, but these could be named whatever you like. The next part is similar to a switch statement in PHP: for each case, print that we are connecting and then SSH to the IP. Finally there’s a quit option and a default option to handle invalid input.

If you use this in conjunction with SSH keys it will save a lot of time.
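
Setting up key-based auth is quick if you haven’t already (assuming your servers allow password auth for the initial copy):

ssh-keygen -t rsa -b 4096
ssh-copy-id youruser@10.0.0.1   # repeat for each server in the menu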

Speed up Magento without Varnish Cache – The Alternatives

More often than not you should steer clear of implementing Varnish in Magento until you’ve exhausted all other options. It seems like the Holy Grail of performance, and it will literally make your site fly; the problem is that Magento is a complicated beast, and any extension claiming a quick Varnish integration with Magento couldn’t be further from the truth (unless of course you’re using a stock, unmodified site). It’s likely you will need a ton of customisation and debugging with any of the Varnish plugins you can buy off the shelf. Not only that, you will need to set up additional servers, learn how to debug Varnish and its various admin tools, and generally spend a lot of time getting it to work properly.

Turpentine offers the most reliable product of the ones I tested, but with limited documentation and help you’re going to struggle to get it set up correctly. The official Varnish Pagecache extension from the makers of Varnish isn’t much better, with a number of unfixed bugs and generally poor documentation. Before getting into all that, you should really consider your options for improving site speed without Varnish.

First, set a benchmark for how your site performs now: go over to http://tools.pingdom.com/fpt/ and record the speed of the homepage, category pages etc. As you make each change you can run the tests again to track your progress.

Optimisation for any platform

  • Tune .htaccess for speed; Creare have an excellent blueprint here (see the sketch after this list)
  • Use a CDN for images; Pica CDN works with Amazon, Rackspace and many others
  • Install SOLR search, free for the CE edition; alternatively this plugin works well and offers a more comprehensive set of features (be prepared for some debugging depending on your site). Both of these plugins require the Tomcat server to be installed
  • Use a CDN for content delivery, e.g. CloudFlare
  • Remove redundant code: PHP comments, HTML comments, anything that adds to the page load, however small
  • Minify your media files (JS, CSS etc); there are a few extensions for this, the most widely used being Fooman Speedster
  • Remove any unused styles or JS
  • Organise your main theme assets into CSS sprites
  • Reduce the quality of the images (save your images in the smallest file size possible), ideally combining the main images into a sprite
  • Use the cloud for your DNS; this can shave off an extra 50ms+ depending on your current provider. CloudFlare and other providers offer this service.
  • Use DNS pre-fetching to resolve the IP address for your assets before you use them. Something for modern browsers; find out more in this guide from Mozilla
  • Use HTML5 browser caching through an app.manifest; there’s an excellent guide on html5rocks
  • Move blocking JS to the footer; Google’s PageSpeed Insights offers a great tool for this (you can only do this for fonts and external JS libraries; don’t try moving Magento’s core JS to the footer). Some libraries need to be in the header.
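
On the .htaccess point, much of the gain comes from far-future expiry headers and gzip; a minimal sketch, assuming mod_expires and mod_deflate are enabled:

<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/png "access plus 1 month"
    ExpiresByType text/css "access plus 1 week"
    ExpiresByType application/javascript "access plus 1 week"
</IfModule>
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>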

Client Side

A modern approach

Most browsers now support HTML5’s new AppCache, a cache in the user’s browser. But browsers already do that, right? Yes they do. However, AppCache works differently in that it’s designed for apps where the user isn’t always online: once a cache is created on the client side, when a page is loaded the cached resources are loaded directly from the cache, with no connections to the live site and no checking whether there’s a newer version. A big speed increase.

However, there are some downsides to this approach around how the cache expires and how to clear it; rather than reiterate, a very good explanation can be found at http://alistapart.com/article/application-cache-is-a-douchebag. I’m working on a module for Magento that makes use of AppCache; if you have any thoughts please leave a comment.

Server Side

Tune Apache

Apache users can achieve quite a performance boost from tweaking the configuration:

  • Put .htaccess rules directly in your vhost conf and turn off .htaccess. Magento has hundreds of directories and files, and each time a call is made to a file Apache has to recursively loop through all of them checking for .htaccess files. It might not sound like much, but it can make a difference. Not only that, you will be securing your server from exploits.
<Directory />
Options FollowSymLinks
AllowOverride None
</Directory>

You can find more about some of the settings here http://httpd.apache.org/docs/2.2/misc/perf-tuning.html

  • Check the Apache config: are the settings correct for your server spec? There are numerous guides on this so I won’t detail it here; you could also try adjusting settings and running the benchmark several times, in normal load and under stress, to see what works best.
  • There are 25 tips here on tweaking Apache

Use Nginx + PHP-FPM

Nginx uses far fewer resources as it works a bit differently from Apache. If you’re already familiar with Apache and don’t have the time to learn Nginx, it’s probably best to take it as far as you can with Apache before considering this option; if you already use Nginx, it works pretty well out of the box. What you might need, though, is some extra configuration settings; there’s a guide here on that.

Make sure you install php-fpm if you’re running nginx:

apt-get install php5-fpm

In most cases the above configuration can be left as-is in nginx. If you’re using port 9000, you will want to edit the php-fpm configuration, usually www.conf, found in:

/etc/php5/fpm/pool.d/

Change the user and group to match that of your nginx installation as default this is www-data

; Unix user/group of processes
; Note: The user is mandatory. If the group is not set, the default user's group
;       will be used.
user = www-data
group = www-data

There are three different ways of running php-fpm: ondemand, dynamic or static. I won’t go into the detail of these, but I’ve found ondemand to work better for Magento; it basically runs as many processes as required up to a maximum, rather than fixing a certain number to run all the time. It’s worth noting you can enable a status page that gives you some information about what’s going on.
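
A sketch of the relevant pool settings in www.conf (the numbers are placeholders; size max_children to your RAM divided by the average PHP process size):

; upper limit on worker processes
pm = ondemand
pm.max_children = 50
; kill idle workers after 10 seconds
pm.process_idle_timeout = 10s
; recycle workers periodically to contain memory leaks
pm.max_requests = 500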

pm.status_path = /status

Uncomment this line in your php-fpm pool config; you can then visit website.com/status to get an output. There’s a great tutorial on using the php-fpm status page here.

Tweak Mysql

There are many guides out there on MySQL and I make no claims to be an expert, but these settings have helped:

skip-name-resolve
log_slow_queries       = /var/log/mysql/mysql-slow.log

If your MySQL is listening locally, e.g. bind-address in my.cnf says localhost or 127.0.0.1, then you don’t need to resolve the name; skip-name-resolve avoids any delay from DNS.

The rest of your config should be tuned per the report from mysqltuner; if that’s new to you, check out this post on how to use it.
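
Running it is straightforward; it’s a single Perl script (URL assumed current; grab it from the project’s site if not):

wget http://mysqltuner.pl -O mysqltuner.pl
perl mysqltuner.pl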

Redis

I have recently come across Redis. It’s an extremely fast cache storage engine and works seamlessly with Magento; it does require some server configuration, but I would say this is one of the easiest to configure and get up and running quickly. There’s also support for sessions via Redis; I haven’t tried that yet, but it also looks very solid. As of CE 1.8, Redis support comes as part of the default install, so it’s as simple as configuring it via local.xml. If you want to find out more, there’s some benchmark information here and a guide to using Magento with Redis on the Magento site.
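
A sketch of the cache section of app/etc/local.xml for CE 1.8+ (assuming Redis on localhost with its default port; the full set of backend options is in the Magento guide linked above):

<global>
    <cache>
        <backend>Cm_Cache_Backend_Redis</backend>
        <backend_options>
            <server>127.0.0.1</server>
            <port>6379</port>
            <database>0</database>
        </backend_options>
    </cache>
</global>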

HHVM

Review coming soon on this.

APC caching

Install it onto your server

sudo apt-get install php-apc
sudo service apache2 restart

Whilst it’s installing, make a note of the version; we will need this later to tweak the settings, e.g.

Get:1 http://uk.archive.ubuntu.com/ubuntu/ precise/universe php-apc i386 3.1.7-1 [79.2 kB]

Enable it via /magento/app/etc/local.xml by adding the following lines (note: if you have more than one Magento install on the same server, make sure the prefix is unique for each one):

<global>
        ...
        <cache>
            <backend>apc</backend>
            <prefix>alphanumeric</prefix>
        </cache>
        ...
</global>

Tweaking APC couldn’t be easier. First check what version is installed (as noted earlier), then download and browse the matching archive from http://pecl.php.net/package/APC

Inside there will be a file called apc.php; you need to put this somewhere that is served by Apache or Nginx, ideally in a password-protected area of your site.

What you want to achieve here is a high hit ratio and little fragmentation; fragmentation usually happens because there is not enough memory allocated to APC. You can alter this by changing the config file in /etc/php5/conf.d/apc.ini, or wherever your PHP install keeps it, until you achieve the desired result.

It should look something like this:

extension=apc.so
apc.enabled=1
apc.shm_size=500M
apc.max_file_size=3M
apc.enable_cli=1

shm_size
The maximum amount of memory APC can use; once it runs out it has to purge cached items, which leads to fragmentation.

max_file_size
The maximum file size that can be cached by APC; this defaults to a low value and I would recommend changing it to 3M or 5M.

Memcache

There’s a decent guide here on installing Memcache; once you’ve done that, follow this guide on enabling it in Magento.

Single split servers

This is generally a good idea for failover: have a separate MySQL server and web server. It takes the load off one machine and allows you to scale each individually; ideally, if you have enough traffic, also separate out the SOLR instance. Splitting MySQL from your web server can have a negative effect, though, depending on the connection between them; you will need a gigabit link to remove network latency, so if you haven’t got much load coming from Apache/nginx it’s probably not worth it.

Google PageSpeed

This can be installed server-side as a module for Apache or Nginx (Nginx requires a rebuild), and allows you to do a lot of optimisation on the fly, like removing whitespace, minifying JS/CSS and even optimising images. More about Google PageSpeed.

App code

Non-Varnish Caching (build your own)

Magento’s built in cache

There are a few ways to add caching that don’t involve Varnish. The first is that Magento comes with comprehensive caching out of the box.

There are really only four methods available to us:

save($value, $key, $tags = array(), $lifeTime = null)
load($key)
remove($key)
clean($tags = array())

So let’s add something to the cache. In this example we are retrieving the lowest price from a grouped product; it’s pretty intensive, as it has to loop through each simple product to retrieve its price. The prices don’t change very often, so we don’t have to do this every time!

// $_product and $helper are assumed to be set by the surrounding template
// load the cache
$cache = Mage::app()->getCache();

if (!$cache->load($_product->getId())) {

    // The cache doesn't exist: find the lowest child price
    $aProductIds = $_product->getTypeInstance()->getChildrenIds($_product->getId());
    $prices = array();
    foreach ($aProductIds as $ids) {
        foreach ($ids as $id) {
            $aProduct = Mage::getModel('catalog/product')->load($id);
            if ($aProduct->isSaleable()) {
                $prices[] = $aProduct->getPriceModel()->getPrice($aProduct);
            }
        }
    }
    asort($prices);
    $prices = array_shift($prices);
    $grouped_price = $helper->currency($prices, true, false);

    // Save it, tagged and with a one-hour lifetime
    $cache->save($grouped_price, $_product->getId(), array("grouped_prices"), 3600);
} else {

    // Load the saved price
    $grouped_price = $cache->load($_product->getId());
}

echo $grouped_price;

So in the above block we are:

  1. First we check whether a cache entry named after the product ID exists
  2. If the load returns false, we run the code as usual and save the price to the cache with a lifetime of 1 hour (3600 seconds) and the tag array("grouped_prices")
  3. If the entry does exist, we load the saved price straight from the cache
  4. In all scenarios $grouped_price ends up holding the price

If you don’t set a lifetime value, the item will be cached until it’s removed manually. To remove this value earlier than the 1 hour, we use remove:

$cache->remove($_product->getId());

We can also remove by tag: if, for instance, you wanted to clear the cache for all grouped product prices, you can use clean.

$cache->clean(array("grouped_prices"));

Full Page caching and others

Unicache Inchoo

This builds on the default caching system but really only offers convenience and an admin section allowing you to clear the cache for individual items. Check it out here.

Lesti FPC

Gordon Lesti wrote his own FPC for Magento. It’s easy to install; follow the guide on using it. It suffers from the same problem as Varnish in that if you have custom blocks you will need to configure them for the site to work properly, but it does not require additional servers or software, so it cuts a lot of setup time.

Do you know of any other ways to speed up Magento? Get in touch.

Quafzi Performance Tweaks

I recently came across this module, which offers a lot of optimisations based on recommendations from ecommerce devs:

https://github.com/quafzi/magento-performance-tweaks

One thing I had to disable in this particular module is the CMS block caching, if, like me, you are using a block to load a template that changes for each product category. It’s well commented, so it’s easy to see the particular changes that might affect your site.

Forwarding the User IP from a Rackspace Cloud Load Balancer

If your setup includes one of the Rackspace cloud load balancers, you will notice that in Apache or PHP the client IP appears as the load balancer’s. The easiest way to fix this is to install an Apache module called mod_rpaf.

Here’s how under Ubuntu & Apache 2.2:

Install the apache dev tools (may not be required)

Check which version of Apache you’re running:

apache2 -l

The output here tells me I’m using the prefork module.

Compiled in modules:
  core.c
  mod_log_config.c
  mod_logio.c
  prefork.c
  http_core.c
  mod_so.c

Install the prefork development tools

apt-get install apache2-prefork-dev

The alternative here is to install apache2-threaded-dev if you see threaded instead of prefork.c.


Install the mod_rpaf 

Download and extract the mod_rpaf module; to check for the latest version see http://stderr.net/apache/rpaf/download/

wget http://stderr.net/apache/rpaf/download/mod_rpaf-0.6.tar.gz
tar -xzf mod_rpaf-0.6.tar.gz

Install and compile the apache module

cd mod_rpaf-0.6
apxs2 -i -c -n mod_rpaf-2.0.so mod_rpaf-2.0.c

The script will compile and install the module, giving you the output of the path to the module, e.g.

/usr/lib/apache2/modules/mod_rpaf-2.0.so

Configure

Make a new config file for rpaf and edit it:

touch /etc/apache2/conf.d/rpaf.conf
nano /etc/apache2/conf.d/rpaf.conf

Enter the following details in rpaf.conf, changing 10.000.000.0 to your load balancer IP (separate with a space for more than one):

# Your module path
LoadModule rpaf_module /usr/lib/apache2/modules/mod_rpaf-2.0.so

RPAFenable On
RPAFsethostname On
# Your load balancer IPs, separated with a space for more than one
RPAFproxy_ips 10.000.000.0
RPAFheader X-CLUSTER-CLIENT-IP

Check the Apache configuration:

service apache2 reload

If you see any errors here check your configuration.

Restart Apache to apply the changes:

service apache2 restart

You should then see the correct user IP showing up in $_SERVER['REMOTE_ADDR'] in PHP.

Update:

To get the correct IP from the Load Balancer to .htaccess

If you’re looking to restrict access from .htaccess, mod_rpaf still doesn’t give Apache the correct IP for that; instead, use a pre-configured environment variable:

SetEnvIf X-Cluster-Client-Ip 000.000.000.000 allowclient
order deny,allow
deny from all
allow from env=allowclient

Clean up linux boot partition

If your boot partition is getting full and you’re running the machine as a server with no GUI, you can run this command to remove all those unnecessary kernel images; if you update frequently they can take up quite a bit of space.

dpkg --get-selections|grep 'linux-image*'|awk '{print $1}'|egrep -v "linux-image-$(uname -r)|linux-image-generic" |while read n;do apt-get -y remove $n;done

You may also need to update GRUB by running /usr/sbin/update-grub after this has completed.