In 2016, we had the opportunity to work with the team behind Tom Clancy’s The Division. The collaboration was so successful that we reunited for the sequel, The Division 2, developed by several Ubisoft studios across the world, including Reflections, Leamington, Red Storm, Bucharest, Sofia, Shanghai, and Annecy.
We decided to give you full insight into how we worked with the lead studio, Massive Entertainment. Over a frosty coffee in Sweden, we chatted with Marian Tonea, IT Production Manager in charge of the game development team, and Jan Harasym, Senior Online Infrastructure Engineer on The Division 1 and 2.
(Marian) Because the game depends on that. All the players want to have a good connection, right? If you have outages or incidents during the launch, or even during the post-launch phase of the game, players will have a bad experience. So you want to have a very stable infrastructure.
(Jan) The stability of the infrastructure – power and networking – is obviously crucial. The availability of hardware is also a very strong factor. We want consistent hardware because we want a consistent experience across the planet. The expertise of the engineers, and access to those engineers, is paramount. We’re delivering a game as a service; it is not something we can just throw over to someone else. It is a collaborative effort. So, that is incredibly important.
(Jan) Usually, when we are choosing a hosting solution, it is a joint effort. We try to get involved as soon as possible. As soon as the project has been green-lit, we’ll have developers come to us in an advisory fashion, asking what kind of things they can assume about the infrastructure, how much bandwidth they can get, what type of CPUs they can expect. We work together to figure out what is feasible over time.
(Jan) From an infrastructure perspective, before we launch a game like this, at the very start we’re going to have a small footprint in every major region. So think about the divisions of the world. We’ll run some technical tests, like a feasibility study, to make sure our concepts are good. Then we bring in some gamers and get them playing the game, checking our infrastructure. After that, we run more tests. As we get closer to launch, we will think about having an Alpha on platforms like Xbox and PlayStation. The next stage would be to have a smaller environment for play, to certify the game. Once the game is certified, we ramp up and give the game to more players to check bugs, bandwidth, and CPU usage. Then we have two stages in Beta: closed and open. The closed Beta is given to people who are a little more favorable (who will not bash us on social media when they encounter problems) to verify that the infrastructure is going to handle the number of players. It’s also the last chance to find any major bugs. When we pass the closed Beta, we open it up to a huge number of players, comparable to the launch. That is the absolute last moment we can find bugs. At that point we have a very short window, but we can start provisioning more servers for launch. That is where the flexibility of i3D.net comes in incredibly well.
(Jan) One of our first goals was to use the new provisioning API, and we worked with i3D.net to extend it to support another operating system, FreeBSD. We also wanted to use the managed firewalls. Because the networking expertise of i3D.net was so good, it was obvious to us that they could manage our firewalls; we felt we could trust them. We were also very confident in i3D.net’s ability to provide a good experience for players. We wanted to ensure that the maximum number of people were using i3D.net servers, so we only had to burst into the cloud when absolutely necessary.
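To make the provisioning flow a little more concrete, here is a minimal sketch of requesting a machine with a specific operating system image through a generic HTTP provisioning API. The base URL, endpoint, payload fields, and image name below are illustrative assumptions for this article, not i3D.net’s actual API.

```python
# Illustrative only: the base URL, endpoint, and payload fields are
# hypothetical placeholders, not the real i3D.net provisioning API.
import requests

API_BASE = "https://api.example-host.net/v1"  # placeholder base URL
API_TOKEN = "REPLACE_WITH_TOKEN"              # placeholder credential


def provision_server(region: str, os_image: str, count: int = 1) -> dict:
    """Request dedicated servers in a region with a given OS image."""
    response = requests.post(
        f"{API_BASE}/servers",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"region": region, "os": os_image, "count": count},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    # For example, ask for a FreeBSD image in a European location.
    print(provision_server(region="eu-west", os_image="freebsd-12"))
```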
(Jan) So, I can speak to why we chose i3D.net, because I was part of the original team working on The Division 1 in 2015 and 2016. It was actually quite an intensive process trying to choose a provider we could trust, to be honest. We were always an online game, one of the first within Ubisoft, so it was a big deal for us. Ultimately, we found that the collaboration between i3D.net and Massive was incredibly fluid and that we had unified goals. The networking expertise was second to none, truly best-in-class, and that really drove quite a lot of the decisions. Consistent hardware, the same platform across the whole world… No one in the whole company had anything negative to say about i3D.net.
(Jan) In terms of dealing with the volume of players, we wanted to have a baseline [capacity] addressed with physical servers at i3D.net. Small events or bursts of anticipated players were then addressed with an auto-scaling mechanism in the cloud, which we developed. We spawned servers for those players in the cloud every time we needed to.
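As an illustration of that baseline-plus-burst split, the sketch below shows the kind of arithmetic such an auto-scaler has to do: fill the fixed bare-metal baseline first, then spin up cloud instances only for the overflow. The figures, names, and structure here are hypothetical, not Massive’s actual mechanism.

```python
# Hypothetical sketch of a baseline-plus-burst capacity model.
# Figures and names are illustrative, not Massive's implementation.
from dataclasses import dataclass


@dataclass
class Capacity:
    bare_metal_slots: int      # players the fixed bare-metal baseline can serve
    players_per_cloud_vm: int  # players one burst cloud instance can serve


def cloud_vms_needed(current_players: int, cap: Capacity) -> int:
    """How many cloud VMs are needed for the players above the baseline."""
    overflow = max(0, current_players - cap.bare_metal_slots)
    # Ceiling division: a partially filled VM still has to be started.
    return -(-overflow // cap.players_per_cloud_vm)


if __name__ == "__main__":
    cap = Capacity(bare_metal_slots=200_000, players_per_cloud_vm=500)
    for players in (150_000, 230_000, 400_000):
        print(f"{players} players -> {cloud_vms_needed(players, cap)} cloud VMs")
```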
(Marian) To start with, we had a very good collaboration with i3D.net during the development, launch, and post-launch of The Division 1. So that was the first stage where we decided that i3D.net was the right partner for Massive going forward. During the post-launch of The Division 1, we set up a communication channel for questions or issues we saw in our monitoring systems. i3D.net was always responsive and brought in the right people to address those issues. For the launch [of The Division 2], we knew that we wanted people from i3D.net to help us launch the game, as with The Division 1. This time, i3D.net sent two people to our studio to help us during the launch. They were embedded in our team, and I think part of the success of the launch was due to them being present in our office in Sweden.
(Marian) For The Division 2, we added a few extra locations, especially in regions where we had noticed high latency: the Middle East and South America. Overall, we have 10 locations across the world.
(Marian) Overall, we did very well. The launch was a success, especially from an infrastructure perspective, because we didn’t have any major incidents.
(Jan) The results were excellent. We served all the players we wanted to serve, in the time we wanted to serve them. We didn’t have any people waiting in queues; we had no major outages or disconnections. Everything was incredibly smooth and well-run. We’re thrilled with the results.