Setting up a hospitality business model on AWS

Reading Time: 5 minutes

A capstone project by Sajal Biswas and Shreya Sharma

Use Case: Accommodation options in the travel industry are not limited to hotels and resorts. People often look for homestay options, as this model benefits both parties: tourists enjoy home-like comfort while owners earn reasonable rental revenue.

Introduction:

We have taken the Airbnb business model as a reference and analyzed how to utilize AWS cloud services so that the business only needs to focus on its model.

We follow a serverless architecture for our proposed solution. Serverless architectures significantly reduce operational cost, complexity, and engineering lead time, at the price of increased reliance on the vendor.

Architecture:

CI/CD Architecture:


Tech stack used:

– React.js for creating the web application, built with AWS Amplify

– Profile management using AWS Cognito

– Chatbot using AWS Lex and AWS Amplify

– Static website hosting on an S3 bucket

– CloudFront for CDN

– Code repository in CodeCommit

– Backend APIs using Lambda functions (in Python), triggered via API Gateway

– AWS Elasticsearch for efficient search functionality

– DynamoDB for storing data as key-value pairs

– Static files such as images kept in an S3 bucket

– CloudWatch alarms for monitoring

– AWS SES to send emails to customers

– AWS Pinpoint and Athena for analytics

Case Studies:

  1. How can we develop APIs as fast as the business needs to launch in the market, without provisioning infrastructure or managing load balancing, and at low cost?

For this requirement, a serverless architecture is the best choice, so that is what we have implemented; the business need not worry about infrastructure changes and management.

  2. What if we want to track email communication with users and process the data based on their replies?

Enterprise solutions not only want to send promotional and service emails, but are also interested in user replies and in tracking the whole communication thread. We have implemented AWS SES for this feature; although we have only integrated sending email from a Lambda function, the other features can also be explored.
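
As a rough illustration of the sending path, here is a minimal sketch of a Lambda function that sends a confirmation email through SES with boto3. The addresses, region, and event fields are placeholders, and the source address must be an SES-verified identity:

```python
import boto3

# Region is an assumption; use the region where your SES identity is verified.
ses = boto3.client("ses", region_name="us-east-1")

def lambda_handler(event, context):
    # "customer_email" and "customer_name" are hypothetical event fields.
    response = ses.send_email(
        Source="bookings@example.com",  # must be a verified SES identity
        Destination={"ToAddresses": [event["customer_email"]]},
        Message={
            "Subject": {"Data": "Your homestay booking is confirmed"},
            "Body": {
                "Text": {"Data": f"Hi {event['customer_name']}, your stay is confirmed."}
            },
        },
    )
    return {"MessageId": response["MessageId"]}
```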

  3. The design approach for searching and listing properties on the website

We expect a large amount of data to be generated, and hence a huge transaction volume, so we chose DynamoDB. We maintain the property list with the partition key <propertyCode>_<stateCode>_<pinCode> so that we can search easily, and so that a burst of requests is spread across partitions and the hot-partition-key issue does not arise.
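
A minimal sketch of this key scheme with boto3, assuming a hypothetical Properties table whose partition key attribute is named propertyId:

```python
import boto3
from boto3.dynamodb.conditions import Key

# "Properties" and the "propertyId" attribute name are assumptions.
table = boto3.resource("dynamodb").Table("Properties")

def save_property(property_code, state_code, pin_code, details):
    # Compose the partition key exactly as described above.
    pk = f"{property_code}_{state_code}_{pin_code}"
    table.put_item(Item={"propertyId": pk, **details})

def get_property(property_code, state_code, pin_code):
    pk = f"{property_code}_{state_code}_{pin_code}"
    return table.query(KeyConditionExpression=Key("propertyId").eq(pk))["Items"]
```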

  4. Efficient search functionality using AWS Elasticsearch

We save each record to AWS Elasticsearch alongside DynamoDB. We have also created a Lambda function that collects transaction data from DynamoDB and writes a CSV file to an S3 bucket, which Athena then uses for analytics.
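
A minimal sketch of that export Lambda, with the table name, bucket, and column names as assumptions (a production version would also paginate the scan):

```python
import csv
import io
import boto3

dynamodb = boto3.resource("dynamodb")
s3 = boto3.client("s3")

FIELDS = ["transactionId", "userId", "propertyId", "amount", "date"]  # assumed columns

def lambda_handler(event, context):
    # Single-page scan for brevity; follow LastEvaluatedKey to paginate.
    items = dynamodb.Table("Transactions").scan()["Items"]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    for item in items:
        writer.writerow({k: item.get(k, "") for k in FIELDS})
    # Athena can then query the S3 prefix through an external table.
    s3.put_object(
        Bucket="analytics-bucket",
        Key="transactions/transactions.csv",
        Body=buf.getvalue(),
    )
```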

  5. Is it possible to increase customer interaction instantly?

We have integrated a Lex chatbot with basic functionalities.

  6. What would be a good approach for user profile management?

The initial thought was to use AWS RDS for this, but we settled instead on a managed service purpose-built for it: AWS Cognito.
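
To show why Cognito removes the need for a hand-rolled user table, here is a hedged sketch of programmatic sign-up against a user pool with boto3; the app client ID is a placeholder (in our project the Amplify frontend handles these calls for us):

```python
import boto3

cognito = boto3.client("cognito-idp")

def sign_up(email, password):
    # "YOUR_APP_CLIENT_ID" is a placeholder for the user pool app client.
    return cognito.sign_up(
        ClientId="YOUR_APP_CLIENT_ID",
        Username=email,
        Password=password,
        UserAttributes=[{"Name": "email", "Value": email}],
    )
```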

  7. Analytics from a business perspective

Currently, we use the following services for analytics:

– AWS Pinpoint

– Athena queries

Technical Details:

Website hosting with API integration:

We have developed a static website using React.js and AWS Amplify. The website is hosted on an S3 bucket, with CloudFront integrated for caching and CDN.

– User registration, login, password management, logout, and session management using AWS Cognito

– Lex chatbot for basic functionalities

– Integration with backend APIs deployed on API Gateway. We use a consistent response JSON format, i.e. an array of objects

– AWS Pinpoint for tracking user activity on the website

Deployment:

Repository Management: The website repository is maintained in AWS CodeCommit.

CI/CD: We use AWS CodePipeline for website deployment.

API deployment: All backend APIs are deployed on API Gateway, integrated with AWS Lambda, and we have created a dev stage environment for them.

Monitoring and Metrics:

We have used CloudWatch Logs and metrics for debugging and monitoring, with various tags.

APIs and Database:

We have created the APIs with AWS Lambda as the backend. All functions are written in Python.

Although neither of us has expertise in Python, we learnt about it in the PGPCC course. 

Library:

We used the pip package manager to install boto3 for Python.

API Endpoints:

All Lambda functions are exposed through API Gateway as POST requests, wherein we use an “action” field in the request body; based on this field, the API responds accordingly.
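
A minimal sketch of this dispatch pattern, with hypothetical action names and handler stubs:

```python
import json

def create_product(body):
    # Stub: in our service this writes the property to DynamoDB.
    return [{"status": "created"}]

def get_all_products(body):
    # Stub: in our service this queries DynamoDB or Elasticsearch.
    return []

ACTIONS = {
    "createProduct": create_product,
    "getAllProducts": get_all_products,
}

def lambda_handler(event, context):
    body = json.loads(event["body"])
    handler = ACTIONS.get(body.get("action"))
    if handler is None:
        return {"statusCode": 400, "body": json.dumps({"error": "unknown action"})}
    # Consistent response shape: a JSON array of objects.
    return {"statusCode": 200, "body": json.dumps(handler(body))}
```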

Service details:

We have created the following services:

Product Management Service:

We have created three functionalities by querying the DynamoDB database or Elasticsearch:

– Create product

– Get all products

– Get all products by state

For the same functionality, an “es_service” flag in the request body decides whether we call DynamoDB or Elasticsearch.
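
A hedged sketch of that switch, assuming a hypothetical Elasticsearch domain endpoint and a stateCode global secondary index on the DynamoDB table; SigV4 request signing for the domain is omitted for brevity:

```python
import boto3
import requests
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Properties")
ES_ENDPOINT = "https://search-mydomain.us-east-1.es.amazonaws.com"  # placeholder

def get_products_by_state(body):
    if body.get("es_service"):
        # Elasticsearch path: full-text match on the state code.
        query = {"query": {"match": {"stateCode": body["state"]}}}
        resp = requests.post(f"{ES_ENDPOINT}/properties/_search", json=query)
        return [hit["_source"] for hit in resp.json()["hits"]["hits"]]
    # DynamoDB path: assumes a "stateCode-index" GSI exists.
    return table.query(
        IndexName="stateCode-index",
        KeyConditionExpression=Key("stateCode").eq(body["state"]),
    )["Items"]
```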

Transaction Management Service:

We have created three functionalities by querying DynamoDB:

– Create transaction

– Get all transactions, and transactions by UserID

– Transactions by date for a particular UserID

We also built a Transaction Analytics service, which gathers all transaction data and dumps it into S3 as a CSV file, where we can query the data using Athena.

Conclusion:

Serverless computing offers several advantages over traditional cloud-based or server-centric infrastructure. For many developers, serverless architectures offer greater scalability, more flexibility, and quicker time to release, all at a reduced cost. With serverless architectures, developers do not need to worry about purchasing, provisioning, and managing backend servers.

We have observed the following advantages while working on this capstone project:

– No server management is necessary

– Developers are only charged for the server space they use, reducing cost

– Serverless architectures are inherently scalable

– Quick deployments and updates are possible

– Code can run closer to the end-user, decreasing latency

Authors’ Bio:

Shreya Sharma – Shreya is an AWS Certified Solutions Architect and is currently working as a Senior Software Developer with Hexaware Technologies Pvt Ltd. in Mumbai. She has a particular interest in all things related to the AWS Cloud, migration from on-premises to the cloud, and backend APIs. She has 8 years of extensive experience designing and developing full-stack applications, both on the cloud and on-premises.

Sajal Biswas – Sajal is passionate about cloud computing development and architecting cloud migration projects with backend API development. He is an OCA 7 (Java), CSM, and Mule ESB certified professional and is currently working with Capgemini as a software consultant in Mule ESB technology. He has a total of 6.7 years of experience, including extensive experience in API integration.

 

Experts Talk Series: Cloud Security Demystified

Reading Time: 5 minutes

Episode 2 – Cloud security overview

Cloud computing is a dynamic platform with continuous provisioning and de-provisioning of on-demand resources based on utility and consumption. It has caused considerable confusion on how this is both different and similar to conventional architectures and how this impacts information and application security from a technical standpoint.

Cloud security should be thought of not only from the perspective of the “physical” location of the resources but also the ones managing them and consuming them. It follows what is known as the shared responsibilities model, where the responsibility is shared between the customer and the cloud provider. 

But how do you know which responsibilities belong to you and which belong to the cloud provider? A good rule of thumb would be to break down the security aspect into the various dimensions first.

  1. Applications: This includes access control, application firewalls, and transactional security
  2. Information: This consists of the aspects of database encryption and monitoring
  3. Management: This includes patch management, configuration management, and monitoring
  4. Network: This holds firewalls and anti-DDoS measures
  5. Compute and Storage: The focus here is on host-based firewalls, integrity checking, and file/log management
  6. Physical: This includes physical data centre security

After you have identified these aspects in your application, they can be mapped to your cloud provider to check which controls exist and which do not. The division of the responsibilities of these dimensions will depend on the classification of your cloud implementation based on the “SPI model”, i.e. SaaS, PaaS or IaaS.

Read Episode 1: Migrating to the cloud

SPI Model

Software as a Service (SaaS) – In this implementation, the customer is given the use of software or an application deployed and managed by the provider on a cloud infrastructure. The customer cannot control or manage this infrastructure, apart from limited customization and configuration options based on special requirements.

The user has the responsibility of managing access to applications and dictates policies on who has access to which resources. For example, an employee from the sales team may have access to only data from the CRM application, someone from the academic team may only have access to the LMS, etc. The rest of the cloud stack is the responsibility of the cloud provider including infrastructure and the platform.

Platform as a Service (PaaS) – This enables the customer to build, deploy, and manage applications on the cloud using programming languages and tools supplied by the cloud provider. The organization can deploy applications without having to manage the underlying hardware and hosting capabilities.

The cloud provider takes responsibility for securing the platform provided and all stacks below it. The customer has the responsibility of securing the developed application and all access to it. It is also recommended that customers encrypt all application data before storing it on the cloud provider's platform, and plan for load balancing across different providers or across geographical regions in case of an outage.

Infrastructure as a Service (IaaS) – The cloud provider delivers computing infrastructure along with storage and networking needs via a platform virtualization service. The customer can then run and deploy applications and software on the infrastructure as per their need.

The responsibility of the underlying hardware along with all used storage and networking resources falls with the cloud provider. The customer is responsible for putting controls in place regarding how virtual machines are created and who has access to the machines to keep costs in control and reduce wastage of resources.

 

Recommended practices

– Encrypt data before migrating: Your cloud provider will do everything it can to make sure the data you have uploaded is secure; however, the application as such may not be infallible. If the data contains private information which should not be found by a third party, it needs to be encrypted before storing and/or uploading (a minimal sketch of one way to do this appears after this list).

– Take care of data security (at rest): This primarily falls under the following categories:

– Encrypt your data: All cloud providers will have some encryption systems in place to protect your data from rogue usage. Make sure these systems are in accordance with your organization’s policies. For security reasons, you may also want to manage the encryption keys yourself rather than let your provider do it; check whether this service is available. 

– Protect your keys: Some providers will allow you to manually handle encryption keys in the form of hardware security modules (HSM). This will place the responsibility of managing the keys on the customer but allows for better control. Also, you will certainly be issued SSH and API keys for access to various cloud services. These should be stored securely and protected against unauthorized access. Remember, if the keys are compromised, there is likely nothing your provider can do to help you!

– Data that is deleted stays deleted: Redundancy systems used by cloud providers often replicate data to maintain persistence. As such, sensitive data can often find its way into logging systems, backups, and management tools. It is highly recommended to be familiar with the cloud deployment system so you can keep track of where your data may have ended up.

– Secure your data in transit: Firewalls, network access control solutions, and organizational policies should be in place to make sure that your data is safe against malware attacks or intrusions. For example, policies should also be set up to automatically encrypt or block sensitive data when it is attached to an email or moved to another cloud storage or external drive. This can be made easier by categorizing and classifying all company data, no matter where it resides, to maintain easier access control.

– Unauthorized cloud usage: Strict policies will need to be set up to ensure that employees can only access the resources that they should. Similar measures will need to be put in place to regulate the number of virtual machines being run and make sure those machines are spun down when not in use.

Every cloud provider will have its own governance services to manage resource usage. It is highly recommended that an in-house cloud governance framework is put in place.

– Keep an audit trail: Cloud ecosystems run on a pay-as-you-go basis and can rack up huge bills and lead to considerable wastage when not used properly. Therefore, tracking the use of cloud resources is very important. Your cloud provider will likely have a system in place to generate audit trails, but if your cloud implementation is spread across multiple providers, creating an independent in-house audit trail becomes important. A Cloud Service Broker solution can assist you here by monitoring resource usage and identifying vulnerabilities and rogue users, which brings us to the next point.

– Ask your provider: Your cloud provider will have numerous manuals and whitepapers describing best practices to follow for various implementations. Make sure to take advantage of them!
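
One provider-agnostic way to do the pre-migration encryption recommended above is to encrypt files client-side before they ever leave your network. Here is a minimal sketch using the Python cryptography package (the file names are placeholders; key custody, e.g. in an HSM or a key management service, is up to you):

```python
from cryptography.fernet import Fernet

# Generate a symmetric key; store it securely (HSM/KMS), never beside the data.
key = Fernet.generate_key()
cipher = Fernet(key)

with open("customers.db", "rb") as f:   # placeholder file name
    ciphertext = cipher.encrypt(f.read())

with open("customers.db.enc", "wb") as f:
    f.write(ciphertext)

# Upload customers.db.enc to the cloud; only holders of the key can decrypt:
# plaintext = Fernet(key).decrypt(ciphertext)
```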

 

Cloud Security is a tricky jungle to navigate, but by following some simple guidelines and best practices, you can ensure that your organization's data and applications are safe, and rest easy.

Experts Talk Series is a repository of articles written and published by cloud experts. Here we talk in-depth about cloud concepts, applications, and implementation practices.  

Experts Talk Series: Migrating to the cloud

Reading Time: 5 minutes

Episode 1 – Cloud migration

Migrating to the cloud is a buzzword these days. Every enterprise wants to say that they are “100% cloud-enabled”. If you are an enterprise looking to move over to the cloud, how should you go about it?

First off, let's just clarify that "100% cloud-enabled" is a myth. Most enterprises will have a portion of their business running in their own data centre, also known as on-premises. Therefore, a better way to quantify cloud enablement would be "100% of all applications that have been found fit for the cloud have been migrated".

How to decide if you really need to migrate?

To get the process off the ground, the first thing you have to decide is whether the cloud is the right fit for your use case. If your application landscape consists of legacy code or is highly optimized for the hardware it runs on, it is safe to say the cloud will do more harm than good. But if your application comprises a set of loosely coupled components, each a small, highly specialized, hardware-independent function, these seem like ripe candidates for a cloud-based serverless implementation.

There should also be a good reason for this endeavour. Change for change's sake does not always equal progress. The pros and cons of a cloud-based infrastructure must be taken into account, along with factors like cost and manpower requirements and whether they can be met.

So you want to migrate. What’s next?

Have you decided that you want to jump into the cloud? If so, let’s venture together into the labyrinth of choices you will have to make during this journey.

First, you will have to look at various business dimensions while contemplating your cloud implementation. For example, immediate cost benefits will be highest on IaaS implementations, after a lift and shift of on-premises applications to the cloud. Likewise, other dimensions like time to market, functional responsiveness, and scaling have to be taken into consideration and a balance has to be found. This will help you to decide if your implementation will be IaaS, PaaS or SaaS-based. Perhaps a combination may yield the best results.

The next step is app evaluation. As mentioned earlier, it is necessary to check which applications are fit for the cloud. Low-risk applications from a business perspective can be safely migrated. However, an enterprise may feel more secure storing trade secrets, proprietary functionality, and security services on local servers. Let it be noted, though, that on-premises servers do not guarantee 100% security any more than cloud providers do. As a matter of fact, cloud providers take security very seriously and take strong measures to make sure that you know exactly where, and by whom, your data is being accessed, and that only authorized users can access it.

You may be on the fence about migrating certain services, like client-server applications and supporting functions. For such cases, an ROI analysis will help you decide. Please note that on-premises implementation allows the enterprise to take advantage of financial levers like depreciation. In the end, let me emphasize that these decisions are highly case-specific and are not cast in stone. 

An application in an enterprise is hardly ever standalone. Hence, you will have to go through various levels of integration. The usual options are synchronous and asynchronous integration. The on-premises data centre can be integrated with the cloud to create a hybrid cloud deployment topology. This means the cloud applications can access the on-premises applications directly, though a bit of latency will be at play. Maybe asynchronous or batch-based integration will help hide the latency.

The migration process 

It is a myth that cloud migration is a single-step process. As mentioned earlier, the first step is usually a lift-and-shift approach, where the existing on-premises architecture is cloned onto the cloud. This relieves the enterprise of the burden of maintaining a data centre, but that is all the benefit you will ever get from this approach. After that, some of the functionality can gradually be re-engineered to take advantage of managed cloud services; for example, a database can be moved over to a cloud-provided database service. Then there is the concept of cloud-native applications, where new components or functionality are designed from the get-go to take advantage of platform-specific services built for media, analytics, or content distribution. This way the workload on the enterprise is reduced until you are solely responsible for the business processes while letting the cloud handle the heavy lifting.

The next step is to choose a cloud provider. Your hired or in-house cloud expert can help you make an informed decision from the myriad choices available to you. Which of these is suitable for you is highly situational and requires you to take several factors into consideration, like cost, software or platform requirements, compliance requirements, and geographical zone availability. You may also want to take advantage of a specific API or managed service offered by a particular provider. It should be noted that the top cloud providers have a nearly similar set of services, so if you don't have any highly specialised requirements, you cannot go wrong with any of them.

The on-premises setup then has to be restructured to fit the cloud architecture. Your cloud provider will definitely have a list of reference architectures available based on real-life use-cases and a list of best practices to follow, including but not limited to data and application migration tools. They also have an extensive collection of white papers to aid you in this task.

Implementing the migration plan

The above discussion concludes the planning and selection stage of cloud migration. All that is left now is to implement the plan. This should begin with drawing up and implementing a proof of concept. Not only will this allow you to run performance comparisons with your existing application, but it will also highlight unforeseen challenges and complexity levels which may show up during the actual migration process, allowing you to be prepared for the same. This will also give you a good idea of the reliability of the chosen cloud provider and will allow you to evaluate its support system.

While performing the actual migration, you should be careful to minimize the resulting disruption time and service outages. Dry runs should be conducted to identify potential failure points and minimize errors during the process. Every use case will have its own set of steps to follow during the migration, but it generally starts with taking a backup of the databases, followed by the deployment of applications and the migration of the database. There will also be quite a few application components to manage and set up, like middleware, caching, warehousing, and file systems. All these components must be planned and mapped to the relevant cloud services. Don't forget to set access roles and policies! Make sure you have a clear idea of who should be able to access your applications and which components they can access, then assign appropriate roles for them. Parallel deployments of the application in the cloud and on-premises must be performed to check performance and detect failures.

Benchmarking tests are a must. This will let you know how your cloud application runs in comparison to your on-premises setup and will allow you to fine-tune your setup and be sure if it is ready for deployment.

Congratulations! You have successfully migrated to the cloud. As mentioned before, cloud migration is not a goal but a journey. Every new application will have to be evaluated to see whether it is a better fit for a cloud or an on-premises implementation. If it is destined for the cloud, integration with other applications that may still be on-premises will have to be taken into account. As new services are released by the provider, existing on-premises applications will have to be re-evaluated to see if they can take advantage of those new services.

As you can see, this journey is not easy, but once it has been completed, just sit back and watch the clouds do their magic! With regular management and prompting from you, of course!

Experts Talk Series is a repository of articles written and published by cloud experts. Here we talk in-depth about cloud concepts, applications, and implementation practices.