Howard Lince III, Director of Engineering, LenddoEFL
At LenddoEFL, we work at the intersection of big data, machine learning, and financial inclusion in emerging markets. Each of these implies a level of server sophistication that would be cripplingly difficult without Amazon Web Services (AWS). Our mission is to give one billion people access to powerful financial products at lower cost, faster, and more conveniently. We use AI and advanced analytics to bring together the best sources of digital and behavioral data to help lenders in emerging markets confidently serve underbanked people and small businesses. To date, we have provided credit scoring, verification, and insights products to 50+ financial institutions, serving seven million people. We've been able to do all of this with just three infrastructure engineers managing 300+ servers.
We started LenddoEFL in 2011, and have scaled from operating in one country to a lean, global team operating in 20+ countries. AWS is the reason we could grow as quickly, and as lean, as we did. If we needed a larger server, or to set up a stack in a new country, we could do so without worrying about all the operational work involved in deploying it. AWS tools are aimed at reducing the amount of work a company or team of any size needs to do to ensure a stable service that meets client SLAs.
What follows are my do's and don'ts gleaned from the past eight years.

DO's

1. Start using AWS from day one. If your company has significant data or application needs, you will want to begin on AWS and scale from there.
2. Plan ahead. AWS lets you reserve server instances ahead of time. If you know you'll be operating a server for a long time, reserving it can save you 40 to 75 percent. Servers are expensive, so it pays to think ahead.

3. Look at all the options AWS offers. AWS has a lot of options, and exploring them will minimize your team's workload. For example, AWS Managed Services are valuable for small, lean companies because you don't have to manage anything. When we launch a Redis instance on AWS, we never have to look at it again.

4. Note the types of instances. AWS offers several types of server instances, and you should educate yourself on them. Compute-optimized instances have more power per processor, for example. Optimize for the use case you are deploying for.

5. Consider a support package. Amazon support is bar none the best. They have quick and precise turnarounds that won't leave your business fumbling in the dark.

DON'Ts

1. Give more permissions than necessary. Make sure your team has the permissions it needs, but no more. Within my team, developers have limited read-only access, QA has slightly broader permissions, and infrastructure has the most access. When it comes to security, always follow the principle of least privilege for your team members. They may not be malicious, but they are potential attack vectors.

2. Leave a server behind. Pay attention to where you launch servers, and be sure not to leave them behind. It's easy to launch a server in a region and then forget about it. AWS resources are only visible when viewing a particular region, making it even easier to overlook servers in outlying regions. Pay attention to your bill.

3. Rely on the AWS root login. Instead, rely on SAML-based login to the AWS console. AWS makes SAML IAM provisioning and integration simple.
Whether you use it with Google's G Suite, LastPass, or your own LDAP solution, you can ensure that you're not using a username and password to access your most critical infrastructure. SAML login means you trust the upstream resource to manage your login, and that resource is likely audited and protected by two-factor authentication.

4. Test in production. Instead, create a sub-account as a sandbox for testing things out. It's separate from the rest of your infrastructure, which means a mistake in your sandbox cannot impact your production environment. This matters when you are experimenting with AWS API calls: one wrong command could terminate all of your servers.
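In practice, "no more permissions than necessary" is expressed as an IAM policy attached to a group or role. The following is an illustrative sketch of a read-only developer policy, not our actual policy: the action list is a hypothetical starting point, and the `"Resource": "*"` wildcard should be narrowed to specific ARNs in a real deployment.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DeveloperReadOnly",
      "Effect": "Allow",
      "Action": [
        "ec2:Describe*",
        "s3:ListBucket",
        "s3:GetObject",
        "logs:Describe*",
        "logs:Get*"
      ],
      "Resource": "*"
    }
  ]
}
```

Because IAM denies anything not explicitly allowed, a policy like this gives developers visibility into servers, objects, and logs without the ability to change or terminate anything; QA and infrastructure groups would get progressively broader policies of their own.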
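To make the "plan ahead" arithmetic concrete, here is a minimal Python sketch comparing a year of on-demand billing against a reservation for one always-on server. The hourly rates and the no-upfront term are hypothetical placeholders, not actual AWS prices; real savings depend on instance type, region, payment option, and commitment length.

```python
# Sketch of the reserved-instance savings math, using hypothetical prices.
# Check the AWS pricing pages for real rates before committing.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours for an always-on server

def annual_cost(hourly_rate: float, upfront: float = 0.0) -> float:
    """Total yearly cost for one instance running around the clock."""
    return upfront + hourly_rate * HOURS_PER_YEAR

# Assumed example rates (not actual AWS prices):
on_demand = annual_cost(hourly_rate=0.10)             # pay-as-you-go
reserved = annual_cost(hourly_rate=0.06, upfront=0)   # 1-year, no-upfront reservation

savings = 1 - reserved / on_demand
print(f"on-demand: ${on_demand:.0f}/yr, reserved: ${reserved:.0f}/yr, "
      f"savings: {savings:.0%}")
# With these assumed rates, the reservation saves 40 percent,
# the low end of the 40-75 percent range above.
```

Reservations only pay off for servers you know will stay busy; for spiky or short-lived workloads, on-demand (or spot) pricing is usually the better fit.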