codedamn shifted to Next.js – Case Study
Next.js is an amazing piece of technology that allows you to build truly SEO-friendly pages with a great performance experience for users. In this article, I would like to take you through the series of decisions we made here at codedamn, why we decided to shift to Next.js, and what Next.js brings to the table.
Journey from React
Before Next.js, codedamn was built using React.js as the core technology. The underlying stack included the following:
Material UI
Webpack
Babel
TypeScript
Monaco (for our code runner)
And tons of custom code/packages
Although React was a great choice, we suffered the same fate as every SPA on the web – JavaScript overload.
Common problems with SPA on production
This is not limited to React; it is a problem with any framework that uses JavaScript to populate important content on your page, specifically the visible content of the page.
The SEO of the website is hurt badly. Search engines are optimized to read webpages as HTML, and if you use JavaScript to render that HTML, search engines like Google delay indexing your website for a long time. Worse, other search engines like Bing and Yahoo may give up completely – leading to a loss of organic traffic.
Performance issues – Because you’re sending a big chunk of JavaScript down the throat of less powerful devices like budget mobile phones or older laptops, displaying your content can easily be delayed by 3-5 seconds. Combine that with a slow internet connection and a moderately busy server, and you’ve created a perfectly bad experience for the user. The only thing worse than no internet is slow internet.
No matter what anybody says about the problems with the SPA architecture, the two above are the only ones that are truly unfixable with an SPA alone. For codedamn, this was a big issue: we want to reach a large audience and help them learn to code, and for that we need all the support we can get. Leaving search engines out of the equation was not an option.
Pre-rendering pages
The first instinct for us was to pre-render pages. This simply means using a headless browser like Puppeteer to visit all (or at least the important) pages on your website and capture the rendered HTML output. This is a snapshot of the DOM created after the JavaScript on the page has executed, and hence a true preview of what your user sees.
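Here is a minimal sketch of that idea using Puppeteer – not what any particular service does internally, just the core pre-rendering loop for a single URL:

```ts
// Pre-render a single page: load it in a headless browser, wait for the
// client-side JavaScript to settle, then save the resulting DOM as HTML.
import puppeteer from "puppeteer";
import { writeFile } from "fs/promises";

async function snapshot(url: string, outFile: string) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // "networkidle0" waits until no network requests are in flight –
  // a rough proxy for "the SPA has finished rendering".
  await page.goto(url, { waitUntil: "networkidle0" });
  const html = await page.content(); // the post-JavaScript DOM, as HTML
  await writeFile(outFile, html);
  await browser.close();
}

snapshot("https://codedamn.com/", "index.html").catch(console.error);
```

Run something like this over your sitemap and you have crawler-ready HTML snapshots for every page.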
At codedamn, we tried to pre-render our pages using this service: prerender.io
This SaaS sits between your server and the client and smartly detects whether a bot is accessing your website (in our case, googlebot or any other search engine crawler). If a bot is detected, prerender.io spins up a headless browser, visits the page, gets the rendered HTML, and sends that response back to the bot. Regular users are served as normal.
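To make the mechanics concrete, here is a hand-rolled Express middleware in the same spirit. The endpoint and token header follow prerender.io's documented proxy pattern, but treat the exact details (and the bot list) as assumptions for illustration, not a drop-in integration:

```ts
// Illustrative bot-detection middleware: known crawlers get pre-rendered
// HTML from a pre-render service, everyone else gets the normal SPA.
// Assumes Node 18+ for the global fetch.
import express, { Request, Response, NextFunction } from "express";

const BOT_UA = /googlebot|bingbot|yandexbot|duckduckbot|slurp/i;

function prerenderForBots(req: Request, res: Response, next: NextFunction) {
  const ua = req.headers["user-agent"] ?? "";
  if (!BOT_UA.test(ua)) return next(); // regular user: serve the SPA

  // Proxy the request through the pre-render service, which runs a
  // headless browser and returns the fully rendered HTML.
  const target = `https://service.prerender.io/https://codedamn.com${req.originalUrl}`;
  fetch(target, {
    headers: { "X-Prerender-Token": process.env.PRERENDER_TOKEN ?? "" },
  })
    .then((r) => r.text())
    .then((html) => res.status(200).send(html))
    .catch(next);
}

const app = express();
app.use(prerenderForBots);
app.use(express.static("build")); // the regular SPA assets
app.listen(3000);
```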
But this solution isn’t perfect and, to be honest, is a little dirty to maintain too. We were looking for something permanent and solid, and for a long time we could see that Server Side Rendering (SSR) was inevitable for codedamn, because services like prerender.io fix only part of the problem. You still have performance issues, and even if you enable pre-rendered pages for everyone, there is a lot of setup and maintenance involved – it becomes very easy to lose sight of the core business problem and get lost in the technical side of things.
Introducing Next.js
Making the shift to Next.js was a bold decision because we had a huge existing infrastructure built on React.js alone. But it was worth it. Around July 15th 2020, we decided to move from DigitalOcean-based servers running Node.js on the backend and React.js on the frontend to the following:
We shifted the whole infrastructure to an almost static Next.js architecture.
We moved codedamn's infrastructure off DigitalOcean for the cost savings and caching benefits we would get elsewhere.
We shifted codedamn's Next.js infrastructure (which included static pages and Next.js lambda functions) to Vercel – the company which created and currently maintains Next.js.
For GraphQL, we shifted the architecture to AWS Lambdas (see the sketch after this list). This seemed like the best, most scalable decision for the platform for now.
We also shifted our database to MongoDB Atlas.
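For context on the GraphQL move, a common way to run GraphQL on AWS Lambda at the time was apollo-server-lambda. Here is a hedged sketch with a placeholder schema – this is the general pattern, not codedamn's actual API:

```ts
// One common GraphQL-on-Lambda setup (circa 2020): apollo-server-lambda
// wraps an Apollo server in a handler that API Gateway can invoke.
import { ApolloServer, gql } from "apollo-server-lambda";

// Hypothetical placeholder schema, purely for illustration.
const typeDefs = gql`
  type Query {
    hello: String!
  }
`;

const resolvers = {
  Query: {
    hello: () => "world",
  },
};

const server = new ApolloServer({ typeDefs, resolvers });

// API Gateway invokes this exported handler for every GraphQL request.
export const handler = server.createHandler();
```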
The good of Next.js
So, in a span of 3 days, we decoupled our architecture not only into many small services, but also across entirely different platforms. This brought its own cons, but far more pros. Sticking to Next.js, here's what we found.
The sites were blazingly fast now – thanks to the edge caching Vercel provides and the Automatic Static Optimization built into Next.js
Codebase migration was not very difficult. There were some file-structure changes, but those were adopted quickly thanks to the tests we had in place checking functionality.
Incremental caching – we don't have to invalidate the whole cache when a small change is made somewhere in the codebase (thanks to Next.js 9.5).
Incremental static generation – this is something I love about Next.js. What is better than super fast static webpages? Super fast static webpages that can also update dynamically in the background. This enables us to deploy complete blogs on Next.js (like the one you're reading) – see the first sketch after this list.
We were finally able to host a dynamic blog on a path (codedamn.com/news, which you're reading right now) even though the blog pages you see are statically rendered. We're using Ghost as a headless CMS, but that topic is for another day.
We got immediate SEO benefits and correct status codes (like a real 404 for not-found pages, which was not possible earlier with client-rendered React – see the second sketch after this list) without any external SaaS product sitting in between and monitoring each of our requests.
The complete infrastructure (including lambda functions) is managed by Vercel, which is a big relief for a platform engineer. That means more focus on great code and product, and less focus on the underlying hardware.
Scaling, caching, and deployment stages are also automated and manageable – thanks once again to Vercel.
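Here is what incremental static generation looks like in practice, as promised above – a minimal sketch of a blog page using the revalidate option from Next.js 9.5. The CMS endpoint and the Post shape are hypothetical placeholders, not our actual Ghost integration:

```tsx
// pages/news/hello-world.tsx (hypothetical): served as a static page,
// re-generated in the background at most once every 60 seconds.
import { GetStaticProps } from "next";

interface Post {
  title: string;
  html: string;
}

export const getStaticProps: GetStaticProps<{ post: Post }> = async () => {
  // Placeholder endpoint; Next.js polyfills fetch on the server since 9.4.
  const res = await fetch("https://cms.example.com/posts/hello-world");
  const post: Post = await res.json();
  return {
    props: { post },
    revalidate: 60, // rebuild this page in the background, at most once a minute
  };
};

export default function BlogPost({ post }: { post: Post }) {
  return (
    <article>
      <h1>{post.title}</h1>
      <div dangerouslySetInnerHTML={{ __html: post.html }} />
    </article>
  );
}
```

Visitors always get a cached static page instantly, while Next.js quietly refreshes it behind the scenes.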
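And on the status-code point, the second sketch: Next.js lets you define a custom 404 page that is statically generated yet served with a real HTTP 404 status – something a purely client-rendered SPA cannot do on its own.

```tsx
// pages/404.tsx – statically generated at build time and served with a
// real HTTP 404 status code for any unmatched route.
export default function NotFound() {
  return <h1>404 – This page could not be found</h1>;
}
```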
The bad of Next.js
There's nothing wrong with the framework as such! With the advancements in Next.js, you almost never need to opt out of its built-in mechanisms. As of the time of writing (Next.js 9.5), here are some pain points you might face:
If you have a huge codebase, it might take some time to migrate it completely, because Next.js enforces a file-based routing pattern, which is unusual if you're coming from a router-as-code setup (see the first sketch after this list).
Next.js brings first-class support for Sass and CSS modules, but you cannot customize them out of the box: if you want fine-tuned configuration, you have to opt out of Next.js's Sass and CSS processing and set up your own processor. In our case, we were migrating a codebase which used babel-plugin-react-css-modules to transpile the styleName string attribute into the associated className attribute. This is a small change in configuration, but we had to bring in our full custom setup to implement it (see the second sketch below). It's not bad really – just a place with some scope for improvement.
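The first sketch shows what file-based routing means in practice: the file's path is the route, so a central route configuration has to be translated into a directory structure. The course page below is a hypothetical example, not codedamn's actual code:

```tsx
// pages/learn/[course].tsx – this file path alone makes Next.js serve
// the route /learn/:course; no central route configuration needed.
import { useRouter } from "next/router";

export default function CoursePage() {
  const { query } = useRouter();
  // "course" is filled in from the [course] segment of the file name
  return <h1>Course: {query.course}</h1>;
}
```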
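The second sketch shows the kind of custom Babel configuration involved. Adding your own .babelrc replaces Next.js's default Babel setup, so you must keep the next/babel preset yourself; the plugin options here are assumptions for illustration (and fine-tuning how the CSS itself is compiled may additionally need a webpack override in next.config.js):

```json
{
  "presets": ["next/babel"],
  "plugins": [
    ["react-css-modules", { "generateScopedName": "[local]___[hash:base64:5]" }]
  ]
}
```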
There's not much on the bad side of Next.js, though. If you're a React developer and you're serious about your projects, shifting to Next.js should be a no-brainer. Vercel, the company maintaining Next.js, is doing a great job, and the framework will only thrive and improve over time.