Presentation of the projects dedicated to managing the provisioning and orchestration of delivered CYBNITY infrastructure module versions.
The management of several types of modules is approached with an integrated vision that mixes the best practices proposed by each tool used into a global, common and compatible architecture of the source code projects.
To avoid risks (e.g. large modules are slow, insecure, and difficult to understand/test/review), some modularity requirements are respected for the development and maintenance of production-grade infrastructure modules.
Several assembly projects are managed for the packaging of modular and deployable systems, covering independent and autonomous containerized systems of security features and/or integrated applicative modules (e.g. aggregation of a CYBNITY domain's features).
See the production-grade infrastructure checklist that defines “what are the requirements for going to production?”.
See the logical deployment view documentation for more details on the design requirements that describe the environments, infrastructures and operating conditions required to install, activate and operate the systems safely.
See the systems and tools prerequisites documentation for details about the resource requirements relative to the tools and systems used for the delivery and/or operation of CYBNITY environments.
The CYBNITY systems are application components (e.g. web backend app, standalone executable Java app) that are containerized as deployable and executable systems.
Some applicative systems are containerized as Docker images. To do this, a Dockerfile is managed in each application project (e.g. a domain backend server sub-project), based on a reusable Docker template maintained in the `containers` folder.
Perimeter: the deployable application components (e.g. Java autonomous application) are containerized as Docker images automatically generated by the Maven tool (e.g. during the build process supported by the commit stage). Each application component (initially packaged as an executable binary like a Jar app or Node.js app) is encapsulated into a dedicated Docker image.
Project type: Maven or Node.js implementation structures; dockerization is auto-generated (e.g. Dockerfile generation, image template tagging, push to a Docker repository) via a Maven plugin.
Description: each Java application project contains a configuration of its containerization (e.g. Docker plugin activation in its `pom.xml` file, and a `src/main/docker/xxx-assembly.xml` defining the specific jar and files assembled into the application system generated as a Docker image). Each generated application system is an extended JRE-TEE Docker image, ready to start as an autonomous system. Help documentation about the used Maven plugin is available here.
Example of a `reactive-messaging-gateway` application system project structure:
reactive-messaging-gateway
├── src
│ └── main
│ ├── java
│ │ └── <<java source code packages>>
│ │ └── ... (.java files)
│ └── docker
│ └── shaded-artifact-docker-assembly.xml
├── pom.xml
└── target
├── classes
│ └── <<compiled java class packages>>
├── docker
│ └── <<name of a docker image auto-generated as application system template (e.g reactive-messaging-gateway)>>
│ └── build
│ ├── Dockerfile (<< Docker generated template>>)
│ └── maven
│ └── <<.jar packaged application component ready for image start>> (e.g reactive-messaging-gateway-0.1.0-fat.jar)
└── ...
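As an illustrative sketch of such a containerization configuration (the plugin coordinates, image name and descriptor file below are assumptions, not taken from the actual project), the `pom.xml` activation could look like:

```xml
<!-- Hypothetical excerpt of a pom.xml activating a Docker image build;
     the fabric8 docker-maven-plugin is assumed as the used Maven plugin -->
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <configuration>
    <images>
      <image>
        <!-- Name of the Docker image generated as application system template -->
        <name>reactive-messaging-gateway</name>
        <build>
          <!-- Assembly descriptor (in src/main/docker) selecting the shaded
               jar and files copied into the image -->
          <assembly>
            <descriptor>shaded-artifact-docker-assembly.xml</descriptor>
          </assembly>
        </build>
      </image>
    </images>
  </configuration>
</plugin>
```

With such a configuration, the image is built during the Maven build life cycle and lands under `target/docker`, as shown in the project structure above.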
The infrastructure projects govern the provisioning management at the Kubernetes level as an Infrastructure-As-Code implementation.
Perimeter: applicative or infrastructure modular systems packaged as Services (Kubernetes services) that can be executed (e.g. capability to be deployed into an environment).
Project type: Terraform implementation structure.
Description: the main sub-folder is named `services`, which contains a sub-folder per type of modularized system supported by the Infrastructure-As-Code implementation. An implemented module can represent a deployment module of an infrastructure Service (e.g. Kubernetes service, CYBNITY application service) or System (e.g. a cluster of backend application systems, an availability zone implemented by a Kubernetes Node).
Perimeter: generic, reusable and standalone modules for deploying infrastructure components (e.g. a public load balancer).
Project type: Terraform implementation structure.
Description: a specific sub-folder is created per type of generic technical module (e.g. all the modules relative to networking activities are hosted in a `networking` folder). Each generic/standalone module (e.g. only one `alb` instance shared between several areas or clusters of frontend application modules) is supported by a dedicated sub-folder named by its logical name (e.g. `alb` for an Application Load Balancer as network module name). The goal is to define the generic/reusable modules, which are not clusterized, only once.
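As a sketch of what such a generic `networking/alb` module could expose (resource type, variables and outputs are hypothetical, assuming an AWS-style provider), following the `main.tf`/`variables.tf`/`outputs.tf` layout shown later:

```hcl
# networking/alb/variables.tf — hypothetical inputs of the generic module
variable "alb_name" {
  description = "Logical name of the shared Application Load Balancer"
  type        = string
}

variable "subnet_ids" {
  description = "Subnets the load balancer is attached to"
  type        = list(string)
}

# networking/alb/main.tf — a single shared instance, reusable by several
# clusters of frontend application modules
resource "aws_lb" "this" {
  name               = var.alb_name
  load_balancer_type = "application"
  subnets            = var.subnet_ids
}

# networking/alb/outputs.tf — exported so dependent modules can target it
output "alb_dns_name" {
  value = aws_lb.this.dns_name
}
```

Because the module only takes variables as inputs and exposes outputs, it stays standalone and can be instantiated once per environment.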
Perimeter: generic, reusable and standalone modules implementing the deployment of clusterized infrastructure/applicative modules (e.g a cluster of multiple CYBNITY application modules).
Project type: Terraform implementation structure.
Description: one specific sub-folder is created for each type of clusterized module (e.g. named with a function suffix such as `-rolling-deploy` appended to the name of the managed target module) in the `cluster` sub-folder. For example, a terraformed deployment module named `asg-rolling-deploy` (e.g. an Auto Scaling Group module deploying the webserver-cluster of an application module in a rolling deployment approach) can perform a zero-downtime rolling deployment between several instances of an applicative or infrastructure module.
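A caller of such a clusterized module could look like the following sketch (the source path and input names are illustrative assumptions, not the actual module contract):

```hcl
# Hypothetical usage of the asg-rolling-deploy cluster module from an
# environment configuration (variable names are assumptions)
module "webserver_cluster" {
  source = "../../cluster/asg-rolling-deploy"

  cluster_name = "webserver-cluster"
  min_size     = 2
  max_size     = 4

  # Rolling deployment: new instances are started and health-checked
  # before old ones are destroyed, giving zero downtime.
  instance_refresh = true
}
```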
Perimeter: configuration setting projects of Environments (e.g. Kubernetes clusters) that can be activated, operated, monitored and restored regarding the packaged Services.
Project type: Terraform implementation structure.
Description: the main folder is `modules`, which contains a sub-folder per environment name (e.g. local dev, integrated dev, test, staging & QA, live), each one a Terraformed module where sub-modules can be operated. One configuration file (e.g. .tf, .yaml) is implemented per cluster managed in the source code repository, including labeled deployment. One cluster is defined per environment, with fault tolerance and HA.
- `localhost`: activated by default during the Java component build life cycle
- `dev-deploy-environment`: activable during the build life cycle with property `environment=dev-deploy`
- `qa-environment`: activable during the build life cycle with property `environment=qa`
The integrated terraformed modules are managed via the standard project structure:
modules
├── live-env
│ ├── cluster
│ │ ├── <<clusterized module name>>-<<function>>
│ │ │ ├── main.tf
│ │ │ ├── outputs.tf
│ │ │ └── variables.tf
│ │ └── <<clusterized module name>>
│ │ ├── main.tf
│ │ ├── outputs.tf
│ │ └── variables.tf
│ ├── networking
│ │ └── <<network module name>>
│ │ ├── main.tf
│ │ ├── outputs.tf
│ │ └── variables.tf
│ └── services
│ └── <<CYBNITY application module/service name>>
│ ├── main.tf
│ ├── outputs.tf
│ └── variables.tf
├── qa-env
├── dev-env
└── local-env
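For example, an environment's service sub-folder could compose the reusable modules as follows (paths, module names and inputs are hypothetical, shown only to illustrate how the `networking`, `cluster` and `services` folders relate):

```hcl
# qa-env/services/<service>/main.tf — hypothetical composition of the
# generic networking module and a clusterized deployment module
module "public_alb" {
  source     = "../../networking/alb"
  alb_name   = "qa-public-alb"
  subnet_ids = var.subnet_ids
}

module "app_cluster" {
  source       = "../../cluster/asg-rolling-deploy"
  cluster_name = "reactive-messaging-gateway"
  min_size     = 2
  max_size     = 4
}
```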
Corresponding test and example structures are implemented and should mirror the implemented modules, allowing multiple tests for each module, with each example showing different configurations and permutations of how that module can be used.
test
├── qa-env
│ ├── cluster
│ ├── networking
│ └── services
├── dev-env
└── local-env
examples
├── qa-env
│ ├── cluster
│ │ └── <<clusterized module name>>-<<function>>
│ │ ├── one-instance
│ │ │ ├── main.tf
│ │ │ ├── outputs.tf
│ │ │ └── variables.tf
│ │ ├── auto-scaling
│ │ │ ├── main.tf
│ │ │ ├── outputs.tf
│ │ │ └── variables.tf
│ │ ├── with-load-balancer
│ │ │ ├── main.tf
│ │ │ ├── outputs.tf
│ │ │ └── variables.tf
│ │ └── custom-tags
│ ├── networking
│ └── services
├── dev-env
└── local-env
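Each example sub-folder then pins one permutation of the module inputs; a `one-instance` example could be as small as the following sketch (the relative source path and input names are assumptions):

```hcl
# examples/qa-env/cluster/<<clusterized module name>>-rolling-deploy/one-instance/main.tf
# Hypothetical minimal permutation: a single instance, no load balancer
module "example" {
  source       = "../../../../../modules/qa-env/cluster/asg-rolling-deploy"
  cluster_name = "example-one-instance"
  min_size     = 1
  max_size     = 1
}
```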
Several support resources are installed and provided by CYBNITY to allow the deployment of environment modules during the build activities.
See the example of hardware and virtualized systems as support infrastructure implemented by the CYBNITY project, which can be considered as an example for companies that would like to create a resource infrastructure supporting the development of CYBNITY extensions, the test/optimization of CYBNITY software suite versions, and/or the provision of production environments to final users (e.g. a company's security team, or a provider of a managed platform as a SaaS offer to other companies' security teams).