Discover the key definitions of major cloud keywords worth knowing.
A service in AWS that allows users to define and deploy infrastructure as code, using templates to create and provision AWS resources in a repeatable and automated manner.
A serverless computing service in AWS that allows users to run code without provisioning or managing servers, and pay only for the compute time consumed.
AWS Lambda is a serverless compute service provided by Amazon Web Services (AWS) that allows you to run your code in response to events and automatically manages the compute resources required by your code. It enables you to build and run applications and services without the need to provision, manage, or scale servers.
With Lambda, you can write code in several programming languages including Python, Node.js, Java, C#, Go, and Ruby, and upload it to Lambda as a function. You can then trigger the function in response to events such as changes in data in Amazon S3, updates to records in Amazon DynamoDB, or HTTP requests to Amazon API Gateway.
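Because this glossary centers on Terraform, here is a minimal sketch of deploying a Lambda function with Terraform rather than the function code itself. The function name, handler, and the pre-built `lambda.zip` package are illustrative assumptions, not fixed names.

```hcl
# Sketch: deploying a Lambda function with Terraform.
# Assumes a deployment package lambda.zip containing handler.py already exists.

resource "aws_iam_role" "lambda_exec" {
  name = "lambda-exec-role" # hypothetical role name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "lambda.amazonaws.com" }
    }]
  })
}

resource "aws_lambda_function" "hello" {
  function_name = "hello-world"            # illustrative name
  role          = aws_iam_role.lambda_exec.arn
  runtime       = "python3.12"
  handler       = "handler.lambda_handler" # file.function inside lambda.zip
  filename      = "lambda.zip"             # pre-built deployment package
}
```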
A software development methodology focused on iterative development, continuous feedback, and collaboration between development teams and stakeholders.
Amazon Elastic Compute Cloud (EC2) is a web service that provides resizable compute capacity in the cloud. It allows you to easily launch and manage virtual servers, called instances, in a secure and scalable computing environment.
With EC2, you have complete control over your computing resources, including the ability to select the operating system, networking and security settings, and the software you want to run. You can choose from a wide range of instance types, each with different CPU, memory, storage, and networking capacities to meet your specific requirements.
EC2 instances can be launched in multiple Availability Zones, which are distinct locations within a region, to provide high availability and fault tolerance for your applications. You can also use Amazon Elastic Block Store (EBS) to create persistent block-level storage volumes that can be attached to your EC2 instances.
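As a sketch, launching a single EC2 instance in Terraform looks like this; the AMI ID is a placeholder, and the instance type and Availability Zone are illustrative choices.

```hcl
resource "aws_instance" "web" {
  ami               = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type     = "t3.micro"
  availability_zone = "us-east-1a"            # one of several AZs in the region

  tags = {
    Name = "web-server"
  }
}
```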
An open-source automation tool that can handle configuration management, application deployment, and task automation. It uses a simple, human-readable language called YAML for its playbooks.
An Application Programming Interface (API) is a set of protocols, tools, and standards for building software applications. It specifies how different software components should interact with each other, making it easier for developers to create software that works with other systems or applications. In simpler terms, an API is a bridge that enables different software systems to communicate with each other. It allows developers to access the functionality of another software application or service and use it in their own application. APIs are commonly used to retrieve data from databases or web services, as well as to perform actions or trigger events in other applications.
The process of creating or modifying infrastructure resources based on your Terraform configuration.
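A minimal sketch of the apply lifecycle, using the credential-free `random_pet` resource from the hashicorp/random provider so it can be run without a cloud account:

```hcl
# Typical lifecycle, shown as comments:
#   terraform init     # download providers
#   terraform plan     # preview the changes
#   terraform apply    # create or modify the resources below
#   terraform destroy  # tear everything down again

resource "random_pet" "demo" {
  length = 2 # generates a name such as "relaxed-skunk"
}
```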
Architecture diagramming is the process of creating diagrams that visually represent the architecture of a system or application. Architecture diagrams are used to communicate the design and structure of a system, including its components, interfaces, and relationships between them.
In the context of cloud computing and AWS, architecture diagramming is particularly important, as it allows you to visualize the different components of your application and their interactions with AWS services. A number of tools and services are available for creating architecture diagrams, including Brainboard, the AWS Architecture Center, and third-party tools like Lucidchart and Draw.io.
Artificial Intelligence as a Service (AIaaS) is a type of cloud computing service that provides access to AI technologies and tools. Commonly used terms in AIaaS include: AI model, deep learning, natural language processing (NLP), machine learning, neural network, predictive analytics, reinforcement learning, speech recognition, virtual assistant, and computer vision.
The automatic adjustment of computing resources based on demand, ensuring that applications and services can handle fluctuations in traffic and usage.
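As a hedged sketch, an AWS Auto Scaling group in Terraform keeps the instance count between bounds and adjusts to demand; the AMI ID and Availability Zones below are placeholders.

```hcl
resource "aws_launch_template" "web" {
  name_prefix   = "web-"
  image_id      = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"
}

resource "aws_autoscaling_group" "web" {
  min_size           = 2  # never fewer than two instances
  max_size           = 10 # scale out to at most ten under load
  availability_zones = ["us-east-1a", "us-east-1b"]

  launch_template {
    id      = aws_launch_template.web.id
    version = "$Latest"
  }
}
```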
A commonly used data center architecture term referring to isolated data center locations that act as a safeguard against unexpected outages, which can result in costly downtime. These zones are usually geographically distinct from one another, allowing businesses to distribute their infrastructure and applications across multiple locations to ensure high availability and fault tolerance.
A cloud-based identity and access management service in Azure that provides authentication, authorization, and directory services for applications and users.
A serverless computing service in Azure that allows users to run code without provisioning or managing servers, and pay only for the compute time consumed.
A phrase used to refer to the large volume of structured, semi-structured, and unstructured data that is difficult to mine using traditional software and database techniques. Big data is typically characterized by the three Vs: the Volume of data, the Variety of data types, and the Velocity at which the data has to be processed.
Refers to technologies, practices, and applications for the collection, integration, analysis, and presentation of business information. BI tools access and analyze data sets and present analytical findings in charts, graphs, dashboards, summaries, maps, and reports to provide managers, executives, and other corporate end users with detailed intelligence about the state of their business.
CIDR stands for Classless Inter-Domain Routing. It is a method for allocating IP addresses and IP routing in a more flexible way than the traditional system of classes A, B, and C.
CIDR allows IP addresses to be assigned and aggregated more efficiently, which helps to reduce the size of routing tables and conserve IP addresses. In CIDR notation, a block of addresses is written as an IP address followed by a slash (/) and a prefix length, which plays the role of the traditional subnet mask.
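For example, a /16 leaves 16 host bits (2^16 = 65,536 addresses), and a /24 carved out of it leaves 8 host bits (256 addresses). A sketch in Terraform, using illustrative private address ranges:

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16" # 32 - 16 = 16 host bits -> 65,536 addresses
}

resource "aws_subnet" "app" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24" # 8 host bits -> 256 addresses within the VPC range
}
```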
A configuration management tool that uses a Ruby-based DSL for writing infrastructure recipes. Chef utilizes a client-server model and employs cookbooks to define the desired state of infrastructure components.
IT professionals charged with building and deploying strategies, plans, and applications relating to an organization's increasingly complex cloud technologies.
Cloud backup is the process of backing up data to a remote, cloud-based server.
The delivery of computing services over the internet, including servers, storage, databases, software, analytics, and more.
The practice of continuously monitoring and adjusting cloud usage to minimize costs, while maintaining required levels of performance, reliability, and security.
A cloud-based programming environment that mimics a regular Integrated Development Environment (IDE), typically consisting of a code editor, debugger, compiler, and a graphical user interface (GUI) builder.
A Cloud Management Platform (CMP) is an integrated product that enables users to manage public, private, and hybrid cloud environments from a single platform. CMPs provide self-service interfaces that allow users to easily provision system images, monitor and manage their cloud resources, and optimize their workloads through established policies.
The process of moving applications, data, and workloads from on-premises or legacy systems to the cloud, enabling greater agility, scalability, and cost savings.
The continuous monitoring of cloud-based applications, infrastructure, and services, providing insights into performance, availability, and security.
An approach to building and running applications that leverages cloud computing, microservices, containers, and DevOps practices to achieve scalability, resilience, and agility.
A company that provides cloud computing services, such as infrastructure, platform, or software as a service, to customers over the internet.
A set of practices, technologies, and policies to protect cloud infrastructure, data, and applications from unauthorized access, breaches, and cyber threats.
Cloud storage is a model of data storage where data is stored on remote servers that can be accessed over the internet. Instead of storing data on local hard drives or servers, cloud storage allows users to store and access data from anywhere with an internet connection.
Compliance is the process of conforming to the decisions and policies set by regulatory bodies. The policies are typically derived from internal directives, requirements, and procedures, or from external laws, standards, regulations, and agreements.
The process of maintaining a consistent and desired state of the infrastructure over time. Tools like Brainboard, Ansible, Puppet, Chef, and SaltStack are popular configuration management tools.
A lightweight virtualization technology that allows applications to be packaged with their dependencies, making them easy to deploy and run in any environment.
A lightweight, portable method of packaging software applications and their dependencies, allowing for greater flexibility, consistency, and portability across different computing environments.
A network of distributed servers that caches and delivers content, such as web pages, videos, and images, from the server closest to the user, reducing latency and improving performance.
The practice of continuously delivering software to production by automating the entire software delivery process, including building, testing, and deploying.
A practice where code changes are automatically deployed to production after passing through the build and testing phases. CD tools include Jenkins, GitLab CI/CD, and Brainboard's CI/CD Engine.
A development practice in which developers integrate their code changes into a shared repository frequently, allowing for early detection of integration issues. CI tools include Brainboard's CI/CD Engine, Jenkins, GitLab CI/CD, and Travis CI.
Cybersecurity refers to the practice of protecting computer systems, networks, and electronic devices from theft, damage, or unauthorized access to data and information. Cybersecurity is becoming increasingly important as we rely more and more on technology for our personal and business needs.
A person, company, or body that determines the purposes for which and the manner in which any personal data is processed. They are the manager of personal data and instruct the processor. A data controller ideally operates autonomously, processing collected data via its own processes; in some instances, however, a data controller has to work with an external service or a third party. Under the GDPR, the Data Controller is responsible for ensuring that personal data that falls under their ambit complies with the regulations.
A data lakehouse is a unified approach to data management that combines the benefits of both data lakes and data warehouses. A data lake is a storage repository that stores raw data in its native format, while a data warehouse is a system that stores structured and transformed data for querying and analysis.
A computerized application used to support courses of action taken in a business or organization. A well-built DSS helps decision-makers compile a variety of data from several sources: documents, raw data, business models, management, and personal knowledge from employees. The DSS can either be computerized or powered by humans. In some cases, it may be a combination of both. The ideal systems analyze data and actually make decisions for the end-user.
The process of deleting infrastructure resources that were created by Terraform.
A set of practices that combines software development (Dev) and IT operations (Ops) to achieve rapid and continuous delivery of software, applications, and services.
An approach to DevOps that emphasizes the integration of security practices and tools throughout the software development and deployment process, helping to ensure that software is secure by design and by default.
A set of processes and procedures to recover data and systems in the event of a catastrophic failure, such as a natural disaster or cyberattack.
Docker is a platform for containerizing applications. It provides an open-source containerization technology that allows developers to create, deploy, and run applications in a portable, isolated environment known as a container.
A distributed computing architecture that brings computation and data storage closer to the devices and users that need it, reducing latency and improving performance.
A scalable block storage service that provides persistent storage for EC2 instances in the AWS cloud, enabling high-performance and low-latency data access.
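A sketch of creating an EBS volume and attaching it to an instance in Terraform; the `aws_instance.web` reference assumes an instance defined elsewhere, and the size, type, and zone are illustrative.

```hcl
resource "aws_ebs_volume" "data" {
  availability_zone = "us-east-1a" # must match the instance's AZ
  size              = 20           # GiB
  type              = "gp3"
}

resource "aws_volume_attachment" "data" {
  device_name = "/dev/sdf"
  volume_id   = aws_ebs_volume.data.id
  instance_id = aws_instance.web.id # assumes an aws_instance.web defined elsewhere
}
```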
A virtual computing environment in AWS that allows users to rent virtual machines and configure them according to their needs.
The ability of a cloud computing system to automatically scale up or down computing resources based on demand, allowing for efficient resource allocation and cost savings.
Encryption is the process of converting data into a secure, coded form called 'ciphertext', which can only be read by someone who has the corresponding decryption key or password. It is the most secure way to protect information assets from unauthorized access or theft. Encryption uses complex mathematical algorithms to scramble the data, making it unreadable to anyone who does not have the key to decrypt it. This helps to ensure that sensitive information, such as financial data, personal information, and confidential business data, remains protected from cyber threats and unauthorized access.
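In the cloud, encryption at rest is often just a configuration setting. As a sketch, this Terraform snippet enables default server-side encryption on an S3 bucket; the bucket name is a placeholder.

```hcl
resource "aws_s3_bucket" "secure" {
  bucket = "example-encrypted-bucket" # placeholder; bucket names are globally unique
}

resource "aws_s3_bucket_server_side_encryption_configuration" "secure" {
  bucket = aws_s3_bucket.secure.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms" # encrypt objects at rest with AWS KMS keys
    }
  }
}
```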
The ability of a cloud solution to add new runtime and framework support via community buildpacks.
FinOps (short for Financial Operations) is a set of practices and methodologies aimed at optimizing cloud spending and maximizing the value of cloud investments. It involves collaboration between financial teams, operations teams, and developers to manage cloud costs and ensure that spending aligns with business objectives.
Cloud computing has enabled businesses to achieve greater agility and flexibility, but it has also introduced new challenges related to cloud cost management. With FinOps, organizations can establish processes and tools to manage cloud costs and optimize cloud usage.
A diagram that represents a process or system using a series of symbols and arrows to indicate the flow of information or materials.
A continuous integration and continuous delivery tool provided by GitLab that allows you to automate the building, testing, and deployment of software applications and infrastructure.
A set of practices that combines Git-based version control systems with IaC, CI/CD, and Kubernetes to manage infrastructure and application deployments.
A design principle that ensures systems or applications are always accessible and operational, even in the event of hardware or software failures.
A cloud computing environment that combines the use of private and public cloud services to create a single, integrated infrastructure.
The property of infrastructure code that ensures that running the same code multiple times has the same result as running it once, ensuring that infrastructure is consistent and reliable.
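Terraform's declarative model illustrates this well: applying the same configuration twice changes nothing the second time. A minimal sketch (the bucket name is a placeholder):

```hcl
# The first `terraform apply` creates the bucket; re-running apply on the
# unchanged configuration reports no changes, because the desired state
# already matches the real state.
resource "aws_s3_bucket" "logs" {
  bucket = "example-logs-bucket" # placeholder name
}
```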
IAM (Identity and Access Management) allows you to create and manage multiple IAM users within your AWS account, each with their own set of credentials for accessing AWS resources. You can also create IAM groups to organize your users, and assign permissions to those groups. IAM provides centralized control over all of your AWS resources and their permissions, allowing you to easily manage and enforce security policies.
IAM is a critical component of any AWS deployment, as it enables you to control access to your resources, monitor user activity, and enforce security best practices. By using IAM, you can ensure that only authorized users have access to your AWS resources, and that they only have the permissions necessary to perform their intended tasks.
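A sketch of the user/group/policy model in Terraform; the user name is hypothetical, and `ReadOnlyAccess` is an AWS-managed policy used here for illustration.

```hcl
resource "aws_iam_group" "developers" {
  name = "developers"
}

resource "aws_iam_user" "alice" {
  name = "alice" # hypothetical user
}

resource "aws_iam_user_group_membership" "alice" {
  user   = aws_iam_user.alice.name
  groups = [aws_iam_group.developers.name]
}

resource "aws_iam_group_policy_attachment" "readonly" {
  group      = aws_iam_group.developers.name
  policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess" # AWS-managed policy
}
```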
The practice of creating infrastructure that is designed to never change once it is deployed; when changes are needed, components are replaced with new versions rather than updated in place.
The process of automating the deployment, configuration, and management of infrastructure using machine-readable code.
The automated process of building, testing, and deploying infrastructure code changes to ensure that they are safe and reliable before they are deployed to production.
Infrastructure-as-Code (IaC) is an approach that involves managing and provisioning IT infrastructure through machine-readable definition files, rather than manual configuration processes. It promotes the idea of treating infrastructure as software, making it easier to maintain, version, and share.
A cloud computing model in which computing resources, such as servers, networks, and storage, are provided as a service over the internet.
An integrated development environment (IDE) is an application that provides a programming environment for developers. An IDE typically includes a code editor, automation tools, and a debugger.
The Internet of Things (IoT) is a network of physical objects that are equipped with unique identifiers, such as IP addresses, and the ability to transfer data over a network without human intervention. This network extends beyond traditional computing devices, such as computers, smartphones, and tablets, to a diverse range of devices that use technology to interact and communicate with their environment intelligently over the internet.
An open-source container orchestration platform that allows you to automate the deployment, scaling, and management of containerized applications.
A device or software that distributes network traffic evenly across multiple servers to improve performance, reliability, and scalability.
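As a hedged sketch, an application load balancer in Terraform; the name is illustrative and the subnet references assume two subnets defined elsewhere.

```hcl
resource "aws_lb" "web" {
  name               = "web-alb" # illustrative name
  load_balancer_type = "application"
  subnets            = [aws_subnet.a.id, aws_subnet.b.id] # assumes subnets defined elsewhere
}
```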
HCP (HashiCorp Cloud Platform) provides fully managed services, taking care of tasks such as patching, updates, and backups, allowing users to focus on their core business activities.
A software architecture pattern in which applications are composed of loosely coupled, independently deployable services, enabling greater scalability, agility, and resilience.
Microsoft Azure, formerly known as Windows Azure, is Microsoft's cloud computing platform. It offers both IaaS and PaaS services.
A reusable collection of Terraform resources that can be shared and reused across multiple projects.
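A sketch of calling a module from the public Terraform Registry; the version constraint and inputs are illustrative.

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws" # public registry module
  version = "~> 5.0"                        # illustrative version constraint

  name = "demo"
  cidr = "10.0.0.0/16"
}
```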
A strategy of using multiple cloud providers, services, or platforms to avoid vendor lock-in, improve resilience, and optimize cost and performance.
The ability of a cloud computing system to host multiple users or tenants on the same infrastructure, while ensuring data and application isolation.
A technology that allows multiple virtual networks to run on a single physical network, enabling greater flexibility, security, and isolation.
A type of data storage that stores data as objects, which are accessed through unique identifiers rather than file paths, providing scalability, flexibility, and cost-effectiveness.
On-premises technology is software or infrastructure that runs on computers on the premises (in the building) of the person or organization using it.
Open Source refers to a development model in which a product's source code is made available to the public, allowing anyone to view, modify, and distribute the software. This model promotes collaborative community development and rapid prototyping, as developers can work together to improve the product and add new features. Open source products have become increasingly popular in the field of cloud computing, with OpenStack and CloudFoundry being two examples of open source cloud computing platforms. These platforms provide a framework for building and managing cloud environments, allowing organizations to deploy and manage cloud-based applications and services with greater flexibility and control.
OCI stands for Oracle Cloud Infrastructure, which is a cloud computing platform that offers a comprehensive set of infrastructure services, including computing, storage, networking, and database services. OCI is designed to meet the needs of enterprise customers, providing a highly scalable, secure, and reliable platform for deploying and managing cloud-based applications and services.
A value that is generated by Terraform during the execution of your configuration and can be used as an input to other Terraform configurations.
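A sketch of an output block; the `aws_instance.web` reference assumes an instance defined elsewhere. Outputs are printed after `terraform apply` and can be read back with `terraform output`.

```hcl
output "instance_public_ip" {
  description = "Public IP of the web server"
  value       = aws_instance.web.public_ip # assumes aws_instance.web defined elsewhere
}
```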
A dry-run of the infrastructure changes that Terraform is going to apply. This allows you to see what changes will be made before they are actually applied.
A cloud computing model in which a platform for building and deploying applications, such as an operating system, middleware, and development tools, is provided as a service over the internet.
A private cloud is a cloud computing environment that is owned and operated by a single organization or entity, such as a company or government agency. Unlike public clouds, which are owned and operated by third-party providers like AWS or Google Cloud, a private cloud is typically hosted on-premises or in a dedicated data center.
A cloud computing environment that is owned and operated by a single organization and is not shared with other organizations.
A plugin that Terraform uses to interact with APIs of different cloud providers such as AWS, GCP, or Azure to create, modify, or delete resources.
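A sketch of declaring and configuring the AWS provider; the region and version constraint are illustrative.

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # illustrative version constraint
    }
  }
}

provider "aws" {
  region = "eu-west-1" # example region
}
```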
A plugin that allows Terraform to execute scripts or run commands on a resource after it has been created.
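A sketch of a `remote-exec` provisioner that runs a command over SSH right after an instance is created; the AMI ID is a placeholder, and an Ubuntu AMI with a matching local SSH key is assumed.

```hcl
resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"

  # Runs once, right after the instance is created.
  provisioner "remote-exec" {
    inline = ["sudo apt-get update -y"]

    connection {
      type        = "ssh"
      user        = "ubuntu"              # assumes an Ubuntu AMI
      host        = self.public_ip
      private_key = file("~/.ssh/id_rsa") # assumes a matching local key pair
    }
  }
}
```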
The process of setting up the required infrastructure components, such as servers, storage, and networking, for a software application or service.
A cloud computing environment that is owned and operated by a third-party cloud service provider and provides services to multiple customers over the internet.
A configuration management tool that allows you to define the desired state of your infrastructure using a declarative language. Puppet uses a domain-specific language (DSL) and enforces the desired state through a client-server model.
A pricing model offered by some cloud providers that allows customers to reserve compute capacity in advance, in exchange for a lower price.
A configurable component of your infrastructure, such as a virtual machine, a network interface, or a security group, that can be managed by Terraform.
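A sketch of the anatomy of a resource block, using a security group as the example; the name and rule are illustrative.

```hcl
# Anatomy: resource "<TYPE>" "<LOCAL NAME>" { <arguments> }
resource "aws_security_group" "web" {
  name        = "web-sg"
  description = "Allow inbound HTTPS"

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```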
A model of cloud computing that allows developers to build and run applications without managing servers or infrastructure, enabling greater agility and cost savings.
A database service that provides a fully managed, scalable, and pay-as-you-go model for storing and accessing data without managing servers or infrastructure.
HCP (HashiCorp Cloud Platform) provides service-level agreements (SLAs) for availability and performance, ensuring that users have predictable and reliable access to their infrastructure resources.
An object storage service in AWS that allows users to store and retrieve large amounts of data, with durability, scalability, and security features.
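A sketch of creating an S3 bucket with versioning enabled in Terraform; the bucket name is a placeholder, since names are globally unique.

```hcl
resource "aws_s3_bucket" "assets" {
  bucket = "example-assets-bucket" # placeholder; names are globally unique
}

resource "aws_s3_bucket_versioning" "assets" {
  bucket = aws_s3_bucket.assets.id

  versioning_configuration {
    status = "Enabled" # keep prior object versions for durability and recovery
  }
}
```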
A cloud computing model in which software applications are provided as a service over the internet.
A pricing model offered by some cloud providers that allows customers to bid on unused compute capacity, in exchange for a lower price, but with the risk of potential interruptions.