Isomorphic JavaScript: What is it and what can I do with it?
JavaScript is a language that was built to work on the client, in the browser, to make websites more interactive: react to user input, send XHR requests to a PHP (or Rails/Java/etc.) backend, receive data from the server, and complete a task with that data. This is the way JavaScript was used for a long, long time. But then, in 2009, NodeJS was launched. NodeJS, which most web developers have heard of, is a JavaScript runtime that runs on the server. This means JavaScript is no longer confined to the client side; it can power a full-fledged server as well. This has many benefits: it's fast, the front-end and backend use the same language, and code can easily be shared between the two. But what does this really mean?
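To make the code-sharing point concrete, here is a minimal sketch: a hypothetical email check that runs unchanged on the server and, via a bundler such as webpack or Browserify, in the browser.

```js
// validateEmail.js — one hypothetical module, used on both sides.
function validateEmail(email) {
  // Deliberately simple pattern, for illustration only.
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}
module.exports = validateEmail;

// On the server, the same rule guards the API:
//   if (!validateEmail(req.body.email)) res.status(400).send('Invalid email');
// In the browser, it gives instant feedback in the form:
//   submitButton.disabled = !validateEmail(emailInput.value);
```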
A library called React
Well, to answer that question, let's pair Node on the server with a front-end JavaScript library: ReactJS. ReactJS is a library created by Facebook for building user interfaces out of components. This means you can build a reusable component such as a navigation bar, feed it data from the server (menu items, say), and render it on screen. That's all well and good, but how does it answer the question? ReactJS comes with a way to convert the components within a view to plain HTML strings. This means NodeJS can serve that string as the response to requests hitting its server. This is useful for three different things.
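As a minimal sketch of that string conversion, using React's ReactDOMServer.renderToString (the NavBar component itself is illustrative):

```js
const React = require('react');
const ReactDOMServer = require('react-dom/server');

// A simple illustrative component: a navigation bar fed menu items from the server.
function NavBar(props) {
  return React.createElement(
    'nav',
    null,
    props.items.map(function (item) {
      return React.createElement('a', { key: item, href: '/' + item }, item);
    })
  );
}

// Render the component to a plain HTML string, ready to be served by Node.
const html = ReactDOMServer.renderToString(
  React.createElement(NavBar, { items: ['home', 'about', 'contact'] })
);

console.log(html); // <nav ...><a href="/home">home</a>...</nav>
```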
SEO
With frameworks like AngularJS, the JavaScript won't be executed once a crawler hits your website. This causes misinterpreted meta tags, titles, content, and images. There is a workaround, but it's complicated and just plain annoying: you have to use PhantomJS to render the pages once a crawler hits your site and serve a static HTML version of the requested page. This is slow the first time a page is hit, because the page needs to be rendered on the fly. Once this is done it is cached and the problem is less apparent, but it's still a bottleneck for web applications built with AngularJS. Here's where ReactJS shines. Because the content of views can very easily be converted to strings, NodeJS can serve these pre-rendered pages whenever the corresponding URL is requested. This doesn't just happen when a crawler hits the page; it happens on every request. This means that Google, Facebook, or any other service that uses a crawler to grab page information will always be served static HTML containing all the required information.
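A minimal sketch of such a server, assuming Express and reusing the NavBar/renderToString setup from the snippet above. The point for SEO is that the title, meta description, and content all land in the response body itself, so a crawler sees them without executing any JavaScript:

```js
// Assumes: npm install express react react-dom, plus the NavBar component
// and ReactDOMServer from the previous snippet.
const express = require('express');
const app = express();

app.get('/', function (req, res) {
  const markup = ReactDOMServer.renderToString(
    React.createElement(NavBar, { items: ['home', 'about', 'contact'] })
  );
  // Everything a crawler needs is in the HTML string itself.
  res.send(
    '<!DOCTYPE html><html><head>' +
      '<title>Home | My Site</title>' +
      '<meta name="description" content="Rendered on the server.">' +
      '</head><body><div id="app">' + markup + '</div>' +
      '<script src="/bundle.js"></script></body></html>'
  );
});

app.listen(3000);
```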
Page content on page refresh
Besides making it easy for crawlers to read the page content, NodeJS also helps with page refreshes. Imagine the following scenario. You made a React application with React Router. You hit the index page and everything is perfect. You can navigate between views and everything works perfectly fine. BUT THEN the user decides, for some reason, to refresh the page while on the about page of your React application, and a 404 page is presented. But I made a route for the about page, why is it giving me a 404 page? For the simple reason that the entry point of your React application is at /. This means that unless you are on the home page when you refresh, you will get a 404 page, because the server can't find anything at the requested path; the routing only exists on the client. In AngularJS this is solved by pointing all page requests to the index.html of your application and letting the Angular router handle the rest of the requested URL. In React, in combination with Node, this is much, much simpler. What you can do through Node is render the requested React view to a string and simply serve this string as the response, just like the SEO case above. Because this time it's the user requesting the page rather than a crawler, the browser will render the HTML and the user will be presented with the right page. Once this HTML is rendered by the browser, React will automatically kick-start on top of it, ready for new requests and user actions.
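A sketch of how that could look, continuing with the Express server above. The path-to-component map here is a stand-in for a real router (React Router offers server-side route matching for this), and the components and the page() HTML wrapper are hypothetical:

```js
// Hypothetical view components; in a real app these come from your router config.
const views = {
  '/': HomePage,
  '/about': AboutPage,
};

// Catch-all route: whatever URL the user refreshes on, render that view.
app.get('*', function (req, res) {
  const View = views[req.path];
  if (!View) {
    return res.status(404).send('Not found');
  }
  const markup = ReactDOMServer.renderToString(React.createElement(View));
  res.send(page(markup)); // page() wraps markup in the HTML shell shown earlier
});

// On the client, rendering into the same container picks the server markup
// back up, and React takes over for all further navigation:
//   ReactDOM.render(React.createElement(App), document.getElementById('app'));
```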
Loading speeds through caching
Last but not least, page loading speeds can be drastically improved. Because NodeJS creates an HTML string for every page request, that string can very easily be cached. This way Node can just look in server memory and see if a cached version of the page exists; when it does, it can return this cached version instead of rendering the React view on the fly. Of course you should always set a maximum age on cached pages, because otherwise your freshly updated pages might never actually be presented to the user and all your work will be for nothing. A good guideline for pages that change often is a few hours to a day; other pages can be cached for a week or two. A sensible default is to cache pages for one day at a time, so users get the updated experience soon enough while still benefiting from faster page loads.
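As a sketch of what that might look like (the cache here is a plain in-memory object, and the one-day maximum age matches the guideline above):

```js
const cache = {};
const ONE_DAY = 24 * 60 * 60 * 1000;

// Return a cached HTML string for this URL if it's fresh enough,
// otherwise render on the fly and store the result.
function renderCached(req, renderFn) {
  const hit = cache[req.path];
  if (hit && Date.now() - hit.time < ONE_DAY) {
    return hit.html; // serve from memory, skip re-rendering
  }
  const html = renderFn(); // render the React view on the fly
  cache[req.path] = { html: html, time: Date.now() };
  return html;
}

// Usage inside the catch-all route from earlier:
//   app.get('*', function (req, res) {
//     res.send(renderCached(req, function () { return page(renderView(req)); }));
//   });
```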
Conclusion
So what does it mean to share code between the server and the front-end? It means that user experiences are smooth, response times are low, and implementing new features can be almost instantaneous. There is no need to write the same logic twice (which I catch myself doing a lot in Angular), because the front-end and backend run exactly the same code. And because the code is exactly the same, SEO comes easily through server-side rendering, pages are always available, even after page refreshes, and page reloads can be made incredibly quick through caching. Using the same language across the whole application is quick, convenient, and makes developing a delight, because you only need to know one language for everything.
Posted on: November 17th, 2016