Magento 2 checkout customisation

I recently did some extensive layout and styling work on the checkout in Magento 2 EE. This included adding and removing steps, reorganising the experience into collapsible sections, and plenty of Knockout, JS and template changes, along with some Local Storage state management.

Out of the box it’s a two-step checkout, but breaking it down into sections can offer a better UI experience.

The checkout and minicart are some of the most complicated aspects of Magento 2’s frontend, especially if you have 3rd party integrations.

Get in touch if you’re looking for development work on the Magento 2 checkout experience.

Magento2 breadcrumbs on a product page

By default Magento 2 does not show breadcrumbs on the product page.

Refer to this answer by nuwas for a decent breadcrumb implementation.

https://magento.stackexchange.com/questions/225370/magento-2-2-4-breadcrumbs-do-not-show-on-product-pages-when-default-navigation

If a product belongs to multiple categories or subcategories, the breadcrumb trail may not be ordered correctly. Fix that by ordering by level; it could be improved further by also ordering by position.

    $cats = [];
    foreach ($breadcrumbCategories as $category) {
        $cats[] = [
            'level' => $category->getData('level'),
            'position' => $category->getData('position'),
            'name' => $category->getName(),
            'url' => $category->getUrl(),
            'id' => $category->getId()
        ];
    }

    // Order by category level so the trail runs from root to leaf
    usort($cats, function ($item1, $item2) {
        return $item1['level'] <=> $item2['level'];
    });

    foreach ($cats as $category) {
        $catbreadcrumb = ["label" => $category['name'], "link" => $category['url']];
        $breadcrumbsBlock->addCrumb("category" . $category['id'], $catbreadcrumb);
        $title[] = $category['name'];
    }

Using WordPress as a CMS for Magento2

There are already integrations out there, like FishPig, that integrate WordPress into Magento 2. What I’m talking about here is a more tightly integrated website where WordPress serves all the content pages and Magento the product pages.

After all, one of Magento’s weaknesses is its CMS editing tools, so this makes it a good match for content-heavy projects.

Project criteria:

  • Both WordPress and Magento 2 must support multi-language via the URL
  • Magento 2 to live in /products
  • WordPress to live in the root /
  • WordPress to use WPML for its multi-language URLs

I also wanted WordPress to live in its own /wp folder rather than physically in the root, for a clean separation from the products. My directory structure looked like this:

  • /wp
  • /products
  • index.php
  • .htaccess

To get the languages to work, the only workable solution I could come up with was to create directories for each language in the root.

  • /en/
  • /fr/
  • /fr/index.php
  • /fr/products (symlink)

Inside each of these folders is an index.php and a symlink to the /products folder.

The actual root .htaccess looks like this:

RewriteEngine on
RewriteCond %{HTTP_HOST} ^(www.)?mywebsite.com$
RewriteCond %{REQUEST_URI} !^/wp/
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ /wp/$1
RewriteCond %{HTTP_HOST} ^(www.)?mywebsite.com$
RewriteRule ^(/)?$ wp/index.php [L]

This works perfectly because the rewrite is bypassed for physical files and folders, which allows requests to reach Magento.

You can then set up a global header/footer which can sit in WordPress.

In our case we have a mostly static header/footer, but you could easily use Magento/WP content inside these by having a separate header/footer that includes this file, and setting up variables that can be used inside it.

For example, in Magento’s header.phtml:

$magentoLocale = $this->getLocaleCode();
include_once('/var/www/mywebsite/wp/wp-content/themes/mytheme/global-header.php');

The only downside to the whole setup is having to use long relative paths or absolute paths as above.

$localeUrl = '/';
if (isset($wp_version)) {
    // Language code from WPML
    if ( defined( 'ICL_LANGUAGE_CODE' ) ) {
        $localeUrl .= ICL_LANGUAGE_CODE;
    } else {
        $localeUrl .= 'en';
    }
} else {
    //Language code from Magento
    $localeUrl .= $this->getLocaleCode();
}

In the above we have a simple $localeUrl that we can use before any links.

I would highly recommend a setup like this over using Magento alone, as content-driven pages can then be just that, without the need to buy expensive 3rd party CMS extensions for Magento.

 

Resetting your Ionic project when all else fails

Sometimes you get unexpected errors, missing plugins, etc. A simple way to reset everything afresh is this:

Delete the app’s:

node_modules/
package-lock.json (if present)
platforms/
plugins/

Then run:

npm install
npm run build (try with and without the --prod flag)
ionic cordova platform add android
ionic cordova platform add ios

npm install regenerates node_modules, and the cordova platform commands regenerate platforms/ and plugins/.

How to get a persistent unique device ID in Ionic for iOS

The native Cordova device plugin’s device.uuid gives inconsistent results on iOS; not only that, it changes every time the app is reinstalled. That might be a rare concern for some, but in the app I’m building we use it to lock down which devices can use the app, irrespective of login credentials, so we store our own UUID in the iOS Keychain instead, which persists across reinstalls.

I used this in conjunction with the hashids library, and you will need to follow the instructions to include the Keychain plugin and its Ionic Native wrapper.

validateUuid() {
    // Build a new key from the current timestamp, in case we need to create one
    let date = new Date();
    let components = [
        date.getFullYear(),
        date.getMonth(),
        date.getDate(),
        date.getHours(),
        date.getMinutes(),
        date.getSeconds(),
        date.getMilliseconds()
    ];

    let id = parseInt(components.join(""), 10);
    let hashids = new Hashids();
    let newKey = hashids.encode(id).toUpperCase();

    return new Promise((resolve, reject) => {
        if (window.cordova) {
            this.keychain.get('APP_NAMESPACE_KEY').then((uuid) => {
                if (!uuid) {
                    // No key stored yet: create and persist one
                    this.keychain.set('APP_NAMESPACE_KEY', newKey, false).then(() => {
                        this.appService.set('deviceId', newKey);
                        return resolve();
                    }).catch((err) => {
                        return resolve();
                    });
                } else {
                    // Reuse the previously stored key
                    this.appService.set('deviceId', uuid);
                    return resolve();
                }
            }).catch(() => {
                // Something went wrong reading the keychain
                return resolve();
            });
        } else {
            // Running in the browser: no keychain available
            this.appService.set('deviceId', 'browser-id');
            return resolve();
        }
    }).then(() => {
        // On Android device.uuid is stable, so prefer it
        if (this.platform.is('android')) {
            this.appService.set('deviceId', window.device.uuid);
        }
    });
}

Here our validateUuid function returns a promise. First we generate a new hash in case we need it later, then we check if the keychain value (APP_NAMESPACE_KEY) exists. If it does, we use that value; otherwise we create it.

We also check whether we’re running in the browser by testing for window.cordova; if we are, we fall back to a default “browser-id”.

In my app I have an appService that sets/gets the value of the deviceId which is used elsewhere around the app.

Finally, if running on Android we simply set it to window.device.uuid, as Android does not currently change this on reinstall.
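The key-generation step can be sketched outside Ionic in plain JavaScript. This is an illustration only: generateDeviceKey is a hypothetical name, and the Hashids dependency is swapped for built-in base-36 encoding.

```javascript
// Sketch of the key-generation step from validateUuid(), with the
// Hashids dependency swapped for plain base-36 encoding (illustration only).
function generateDeviceKey(date = new Date()) {
    // Join the date components into one number, as in validateUuid()
    // (note getMonth() is zero-based)
    const components = [
        date.getFullYear(),
        date.getMonth(),
        date.getDate(),
        date.getHours(),
        date.getMinutes(),
        date.getSeconds(),
        date.getMilliseconds()
    ];
    const id = parseInt(components.join(""), 10);
    // Base-36 stands in for hashids.encode() here
    return id.toString(36).toUpperCase();
}
```

The same date always produces the same key, which is why we only generate one when nothing is stored in the Keychain yet.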

 

AWS Opsworks review one year on

For the past year I’ve been using OpsWorks on AWS. I’m not sure who this product is aimed at, but clearly people who are experts in Chef automation and/or have deep knowledge of AWS gleaned from querying support, information not available in the technical docs. When I initially set up OpsWorks it had recently been moved onto a newer version of Chef (12), and there’s very little documentation around; the example GitHub repo and cookbooks are pretty old: https://github.com/aws/opsworks-cookbooks

And I could be wrong, but I don’t think those worked either. The major caveat is that the online interface doesn’t seem to pick up on any changes you make outside of OpsWorks. That might be acceptable, except that even changing things inside the OpsWorks GUI does not always work. Try changing the name of a machine (this is used as the hostname): OpsWorks picks it up from the machine itself, so you then change the machine’s hostname, but it still reverts back to what’s set in OpsWorks. Other anomalies include showing the wrong IP address when an Elastic IP is assigned.

Here’s a snippet from a support query:

When we use managed services, updates, deletes, and other modifications to the infrastructure need to be done within the management console for that service. When we make changes to resources created by something like OpsWorks outside of the OpsWorks console or AWS cli, weird behavior can occur. This is because agents controlling those resources cannot track the changes made to those resources if they are made outside of their jurisdiction. These changes are called “out of band changes.” The above scenario would be considered an out of band change, and OpsWorks is reacting to changes to its resources in an expected manner for this situation. Right now, because of the series of events surrounding the use of that EIP, OpsWorks views that EIP as unassigned to any instance in the stack, hence the conflicting information about the public IP for that instance.

Another feature of OpsWorks is auto-healing: server failure due to a hardware fault can be handled automatically by OpsWorks bringing the instance back up. After a colleague had a long chat with AWS support, it turns out this isn’t supported unless you configure your instances in a specific way. Even when it does work it can end up in a stop_failed status, which effectively means it didn’t work.

The only usable aspect of OpsWorks, and the only reason we are using it, is the integrated user management with AWS IAM policies, which is great for SSH key management. Past that, we haven’t found it reliable or easy enough to use for any of the other claimed features.

 

 

Fully CSS carousel, cross browser?

I was asked to see if I could develop a carousel using nothing other than CSS. Firstly, it depends how you define a carousel: if it must loop infinitely then I don’t think that can be achieved through CSS alone (I would be happy to be proved wrong though!), so this carousel is going to be left-to-right only.

After doing a little research, there are some experimental slideshow-type examples out there using checkboxes, whose on/off state equates to our left and right actions. I figured I could use a radio group, which again provides the states we need, and use :before and :after on the navigation, thus keeping the whole carousel at the same level in the DOM. After going off on a tangent in this direction, cross-browser testing showed that elements without content can’t have :before or :after, which actually makes sense. All browsers except Chrome seem to adhere to this. Chrome is a bit naughty when it comes to development; in my view it is too lenient with the rules.

The next thing that immediately comes to mind is using :focus on a link, but how do you then select the carousel?

Well, by using the ~ and + sibling selectors to select the following siblings, and as above using :before and :after on the navigation, we can keep everything at the same level.

Then using some keyframe animation we can animate the carousel when the link becomes active.

This works in all current browsers except IE9, as of course IE9 does not support CSS animations. So it’s not truly rid of JS if that’s a requirement, and I don’t see a CSS workaround for that other than falling back to a blunt, unanimated left/right jump.

The other aspect to this is mobile: :focus does not have the same meaning on mobile, and I had to use :hover. Usually you would solve this kind of problem by adding classes with JS. As I only tested on Safari, it may differ on other devices.

Have a look at the example carousel

Using logrotate with your Laravel projects

Hard disk filled up by logs? It’s often overlooked until it becomes an issue, and it can become a cycle of death for a server: application errors occur > write to log file > disk space fills up.

There’s a simple way to overcome that and prevent the logs growing to an unmanageable size.

Laravel 4 and 5 do support some log configuration, for example to set it to daily. But this isn’t a concrete prevention of the logs getting out of hand, for that you need the logrotate utility.

Note that this guide isn’t Laravel specific; you could follow it for any log file you’re keeping.

logrotate configuration files are stored in /etc/logrotate.d (Ubuntu, though likely to be the same on most linux environments)

add your file like so:

nano /etc/logrotate.d/laravel

and its contents:

/var/www/project/app/storage/logs/laravel.log {
    rotate 14
    daily
    compress
    maxage 14
}

Options explained

rotate – How many rotated files are kept before old files are deleted
daily – How often to rotate the files (this could be hourly*, daily, weekly or monthly)
compress – Yep, you will want this, as text files compress well
maxage – Files older than 14 days will be deleted

*hourly needs the logrotate cron job updating to run hourly; best practice would be changing it to run every 5 minutes, allowing all your configuration to run whenever it’s required.

*/5 * * * * /etc/cron.daily/logrotate

So in the above example we are keeping a maximum of 14 log files, compressed, rotated daily and deleted after 14 days. In our log directory we should have the following files:

laravel.log
laravel.log.1.gz
laravel.log.2.gz
laravel.log.3.gz
laravel.log.4.gz
laravel.log.5.gz
etc..

There are also some neat tricks you can do with logrotate, like matching all log files with a wildcard, if for example we had more than one log file in the same directory.

/var/www/project/app/storage/logs/*.log {
    rotate 14
    daily
    compress
    maxage 14
}

Or you could specify multiple files

/var/www/project/app/storage/logs/laravel.log /var/www/project/app/storage/logs/debug.log {
    rotate 14
    daily
    compress
    maxage 14
}

You can also run other tasks when the log is rotated

/var/www/project/app/storage/logs/*.log {
    rotate 14
    daily
    compress
    maxage 14
    postrotate
        /usr/sbin/apachectl restart > /dev/null
    endscript
}

Furthermore, you can restrict the size of the files by using minsize, size and maxsize, specifying a unit in k, M or G, so a production example might look like the following:

/var/www/project/app/storage/logs/*.log {
    rotate 7
    daily
    compress
    maxage 7
    maxsize 10M
}

We now know the logs will take up at most around 80MB of disk space (the active log plus 7 rotated files, each capped at 10MB, and less in practice once compressed). It’s important to note that the size options work irrespective of the rotation frequency: even though the log is set to rotate daily, if the file grew to 10MB it would be rotated at the next logrotate run rather than waiting for the daily schedule.

That also depends on having the cron job set up at a higher frequency, as mentioned above.

 

Living in Angular Ionic RC land and the state of Frontend

For the past few months I’ve been upgrading an Ionic v1 app to v2, watching from the sidelines as a new breed of web applications takes form.

As a developer trying to use these platforms it’s been incredibly frustrating, but living on the very edge does have its perks, bringing a deeper understanding of the frameworks, one I wouldn’t have gained from simply installing a stable release.

A friend also recently pointed me to the amusing article on hackernoon https://hackernoon.com/how-it-feels-to-learn-javascript-in-2016-d3a717dd577f#.xv563r3w7

Sadly most of it is true.

Touching on the above frameworks and other libraries, the main annoyance is that many come with build tools (great!) but little guidance on integrating other tasks or extending those tools. Other libraries suffer from this as well, where the developer made a simple build script but doesn’t explain how to use it with an existing project, or you’re simply forced into their way of doing things.

Finally, I’m near the end of the proposed development of my Ionic project (though Ionic 2 still has some bugs), with a fairly decent understanding of Angular 2’s workings, only to find that in the coming months a major release, Angular 4, will arrive!

Hopefully not too many changes to come in that release.

 

Understanding CORS and Pre-Flight

OK, so this isn’t a new thing, but most developers have been thwarted by it at some point. For anyone finding this, I’m going to explain it in plain English, as I’ve lost count of the times I’ve needed to explain these issues to someone, or to myself 🙂


  1. CORS is a browser thing. Whether you use Postman, curl or any other means of making an HTTP request, you’re only going to experience the problem in a browser, as other tools do not enforce CORS.
  2. Simply put, it’s a set of rules checked by the browser against the response headers of a request; these rules determine which websites are allowed access to the resource.
  3. It does not care what framework (Angular, React, jQuery) or vanilla JS you’re using to make your request; CORS issues generally come down to how the resource (the server) you’re querying is configured.

So when making a request, the browser checks what headers the server returns. This can happen before the actual request is sent, in something called a pre-flight request (more on that later).

For example, I make a POST request from website.co.uk to domain.co.uk and the server returns:

Access-Control-Allow-Origin: http://www.domain.co.uk

In this case only pages on domain.co.uk are allowed to access the resource, so the request from website.co.uk will be blocked.
Another example would be that it returns:

Access-Control-Allow-Origin: *

This means anyone can access this resource. Read on for info on setting these headers and what a pre-flight request is.

The other typical CORS issue is related to the allowed methods, e.g. a response containing:

Access-Control-Allow-Methods: POST

Here only the POST method is allowed; trying to PUT or DELETE will not work, and you need to configure your server (see below) to enable them.

What exactly is CORS for?

CORS protects the end user, the client, YOU, from a third party making requests on your behalf that originate from a different website than the one you’re on. It does this by checking the Origin of the request to validate where it came from. If it’s still not making sense, there are some more detailed examples on Stack Overflow.

What is a Pre-flight request?

So many posts I’ve seen where Angular, jQuery etc describe this as a feature of the library. Wrong again: it’s the browser doing this, not your JS library!


Pre-flight is when the browser sends an OPTIONS request before your actual request gets sent. The server typically responds with the options available, e.g. POST, PUT etc., along with the Access-Control rules in the headers.

Here’s an example pre-flight response:

Access-Control-Allow-Origin: http://www.website.co.uk
Access-Control-Allow-Methods: GET, POST, PUT
Access-Control-Allow-Headers: content-type

As you can see, the response headers detail which methods, headers and origins are allowed.

If your website is not in the Access-Control-Allow-Origin header, and it’s not *, that means you’re not allowed; the browser will not make your actual request and will instead return an error like:

Response to preflight request doesn’t pass access control check: No ‘Access-Control-Allow-Origin’ header is present on the requested resource.

You might be asking: why don’t I see a pre-flight on my request? Well, this only happens for what is deemed a non-standard request.

The first part of the spec explains this https://www.w3.org/TR/cors/

It gets slightly more complicated if the resource author wants to be able to handle cross-origin requests using methods other than simple methods. In that case the author needs to reply to a preflight request that uses the OPTIONS method and then needs to handle the actual request that uses the desired method

Simple means GET, HEAD or POST without additional headers (and, for POST, only certain content types), so any header modification will result in a pre-flight request.
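The “simple request” rules just described can be sketched as a small helper. This is a rough illustration (isSimpleRequest is a hypothetical name, and it covers only the method, header and content-type checks mentioned here):

```javascript
// Rough check of whether a request qualifies as "simple" under CORS,
// i.e. whether the browser can skip the pre-flight OPTIONS request.
const SIMPLE_METHODS = ["GET", "HEAD", "POST"];
const SIMPLE_CONTENT_TYPES = [
    "application/x-www-form-urlencoded",
    "multipart/form-data",
    "text/plain"
];
// Headers the browser allows without triggering a pre-flight
const SIMPLE_HEADERS = ["accept", "accept-language", "content-language", "content-type"];

function isSimpleRequest(method, headers = {}) {
    if (!SIMPLE_METHODS.includes(method.toUpperCase())) {
        return false; // PUT, DELETE etc. always trigger a pre-flight
    }
    for (const [name, value] of Object.entries(headers)) {
        const lower = name.toLowerCase();
        if (!SIMPLE_HEADERS.includes(lower)) {
            return false; // any custom header triggers a pre-flight
        }
        if (lower === "content-type" && !SIMPLE_CONTENT_TYPES.includes(value)) {
            return false; // e.g. application/json triggers a pre-flight
        }
    }
    return true;
}
```

So a plain form POST stays simple, while a JSON POST or a request carrying an X-Requested-With header gets pre-flighted.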

Configuring your server to allow CORS

By default most server software, e.g. Apache, will not send any CORS headers, so the browser blocks cross-site requests; you will need to add the headers via your .htaccess or .conf files.

Apache example using .htaccess

<IfModule mod_headers.c>
    Header add Access-Control-Allow-Origin "*"
    Header add Access-Control-Allow-Headers "accept, origin, x-requested-with, content-type"
    Header add Access-Control-Allow-Methods "PUT, GET, POST, DELETE, OPTIONS"
</IfModule>

Typically the above will resolve CORS issues when using JS ajax requests; be careful as to which methods you actually need to allow.

But what are the consequences of doing this? Well, there’s a good reason it’s not allowed by default (see “What exactly is CORS for?” above). The short answer is that you would generally have other protections in place, for example a custom header submitted along with the request that only the client and the server know, and you would most likely want to allow access to only a select few origins. Note that Access-Control-Allow-Origin accepts only a single origin (or *), so to support multiple sites the server has to check the request’s Origin header and respond with the matching one, e.g.

Access-Control-Allow-Origin: http://www.webapp.com

Using safeguards like these, it’s generally OK to enable it.
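In practice the server echoes back the matching origin from an allowlist, since browsers expect a single value in Access-Control-Allow-Origin. Here is a minimal sketch in plain JavaScript (ALLOWED_ORIGINS and corsHeadersFor are illustrative names, not from any framework):

```javascript
// Given the request's Origin header and an allowlist, return the CORS
// headers to send back: echo the origin only if it is on the list.
const ALLOWED_ORIGINS = ["http://www.webapp.com", "http://www.webclient.com"];

function corsHeadersFor(requestOrigin) {
    if (!ALLOWED_ORIGINS.includes(requestOrigin)) {
        return {}; // unknown origin: send no CORS headers, the browser blocks it
    }
    return {
        "Access-Control-Allow-Origin": requestOrigin,
        // Tell caches that the response varies by the Origin request header
        "Vary": "Origin",
        "Access-Control-Allow-Methods": "PUT, GET, POST, DELETE, OPTIONS",
        "Access-Control-Allow-Headers": "accept, origin, x-requested-with, content-type"
    };
}
```

You would merge the returned headers into each response (including the OPTIONS pre-flight response) before sending it.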