This is a standalone backend authentication plugin for use with HashiCorp Vault.
This plugin allows applications running in Cloud Foundry to use Instance Identity to authenticate with Vault.
Background
CF
Cloud Foundry can be enabled to provide instance credentials to its apps (Enabling Instance Identity).
Once enabled, Cloud Foundry injects into each app’s file system a unique x.509 certificate and RSA private key (referenced by $CF_INSTANCE_CERT and $CF_INSTANCE_KEY respectively).
This key-pair is updated regularly and automatically by Cloud Foundry, with a relatively short certificate TTL (eg 24 hours).
The public certificate contains fields identifying the app's Cloud Foundry Org, Space, and Instance GUIDs.
Vault provides a mechanism to create custom authentication plugins. These plugins typically authenticate users through use of cryptography and an established form of trust.
The plugin runs as a separate process on the same host as the main Vault process. Communication is via gRPC.
A common difficulty when using Vault is bootstrapping applications with the credentials they need to authenticate. This can introduce complexity and security concerns
into CI/CD pipelines and/or other deployment mechanisms, since they then become responsible for distributing credentials.
Design
A Vault authentication plugin that allows Cloud Foundry apps to authenticate using the Cloud Foundry Instance Identity system. Apps would not need to be bootstrapped with
credentials. Instead, they would use the certificates provided by Cloud Foundry to authenticate themselves and gain access to Vault secrets.
Authentication Flow
A Vault admin enables the Cloud Foundry Vault authentication plugin, and configures it to trust the Cloud Foundry certificate authority. This is a one-time configuration step.
The Cloud Foundry app generates a JWT token. The token embeds the app's public certificate in the x5c header field and is signed using the app's CF-provided private key.
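The token-generation step above can be sketched as follows. This is a minimal illustration using only Python's standard library, not the plugin's prescribed client code: claim names other than the x5c header are assumptions, and the final RS256 signature (produced with the key at $CF_INSTANCE_KEY, e.g. via a JWT library) is left out.

```python
import base64
import json
import time

def b64url(data: bytes) -> str:
    # JWT segments are base64url-encoded without padding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def jwt_signing_input(cert_pem: str) -> str:
    # x5c entries are base64-encoded DER certificates (RFC 7515, section 4.1.6);
    # stripping the PEM armor lines leaves exactly that base64 payload.
    der_b64 = "".join(
        line for line in cert_pem.splitlines() if "CERTIFICATE" not in line
    )
    header = {"alg": "RS256", "typ": "JWT", "x5c": [der_b64]}
    # Claim names here are illustrative assumptions
    claims = {"iat": int(time.time()), "exp": int(time.time()) + 300}
    return b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())

# The finished JWT appends an RS256 signature over this string; omitted here.
signing_input = jwt_signing_input(
    "-----BEGIN CERTIFICATE-----\nMIIBexample\n-----END CERTIFICATE-----"
)
```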
The app makes an authentication request to Vault. Included in the request is the JWT token created in the previous step.
Vault recognizes the request as a Cloud Foundry authentication request and delegates to the plugin. The plugin processes the request and validates the following:
App’s certificate was signed by the configured CA
App’s certificate has not expired
App’s certificate has proper key usages set
App’s certificate contains the expected Cloud Foundry specific attributes (Org, Space, and Instance GUIDs)
The JWT token was signed using the app’s public/private key pair
Once the plugin validates the provided JWT, it looks up the Vault policies to assign to the app. This can be done in a number of ways:
The app includes a role in its request. A Vault admin can then create a mapping of roles to policies. The roles can be scoped to Cloud Foundry Orgs, Spaces, or Instances
A Vault admin creates a mapping of Cloud Foundry Orgs, Spaces, and Instances to Vault policies
In either case, the plugin is able to perform the lookup using its knowledge about the request.
The plugin informs Vault that it approves or denies the token request. When approved, the plugin provides the Vault policies from the previous step and a sensible TTL, likely based on the certificate’s TTL.
Vault then creates an appropriately scoped token and returns it to the app. The app now uses the token to read and write secrets.
Starting in v2, dotenv is no longer bundled as a dependency. While we still recommend using .env files for configuration, you’ll need to set up dotenv yourself in your project.
Create a .env file with:
OPENCAGE_API_KEY=YOUR-OPENCAGE_DATA_API_KEY
Here are examples:
CommonJS
require('dotenv').config();
// or add `key` as an input parameter of the function geocode
const opencage = require('opencage-api-client');

opencage
  .geocode({ q: 'lyon' })
  .then((data) => {
    console.log(JSON.stringify(data));
  })
  .catch((error) => {
    console.log('error', error.message);
  });
ESM
import 'dotenv/config';
// or add `key` as an input parameter of the function geocode
import opencage from 'opencage-api-client';

opencage
  .geocode({ q: 'lyon' })
  .then((data) => {
    console.log(JSON.stringify(data));
  })
  .catch((error) => {
    console.log('error', error.message);
  });
Typescript
This example does not use dotenv; it specifies the API key as an input parameter.
import { geocode } from 'opencage-api-client';
import type { GeocodingRequest } from 'opencage-api-client';

// Named `run` to avoid shadowing the imported `geocode` function
async function run() {
  const input: GeocodingRequest = {
    q: '51.952659,7.632473',
    // The API key value from process.env.OPENCAGE_API_KEY is overridden by the one provided below
    key: '6d0e711d72d74daeb2b0bfd2a5cdfdba', // https://opencagedata.com/api#testingkeys
    no_annotations: 1,
  };
  const result = await geocode(input);
  console.log(JSON.stringify(result, null, 2));
}
Browser
The browser version is built using UMD notation. Modern browsers can use the ESM version; the example here uses the legacy JS notation.
The proxy URL parameter (useful to hide your API key)
Error handling
The API can return errors such as invalid key, too many requests, daily quota exceeded, etc. Those errors are thrown as JavaScript Errors by the geocode function. The error object contains the same status object as the OpenCage API.
Assuming the catch statement uses error as variable name:
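For illustration, here is the shape of that error handling, using a hypothetical stub geocode that rejects the way the client does (the specific status code and message in the stub are assumptions, not the library's exact values):

```javascript
// Stub standing in for opencage.geocode, for illustration only:
// the real client rejects with an Error carrying the API's status object.
function geocode({ q, key }) {
  if (key === 'invalid-key') {
    const err = new Error('invalid API key');
    err.status = { code: 401, message: 'invalid API key' };
    return Promise.reject(err);
  }
  return Promise.resolve({ status: { code: 200, message: 'OK' }, results: [] });
}

geocode({ q: 'lyon', key: 'invalid-key' }).catch((error) => {
  // error.status mirrors the OpenCage API status object
  console.log('error', error.message, error.status.code);
});
```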
TypeScript verifies that your program uses the right types as you write code, avoiding potential issues at runtime.
By using any, however, you expose yourself to issues that are difficult to trace and debug, especially once the code is deployed to production.
Sometimes we cannot determine a type because we don't know the shape of a library's result or of fetched data,
and we are forced to fall back to any for them.
If you don't want to use any, you can use assert-util-types.
⚙️ Installation
case: use npm
$ npm install assert-util-types
case: use yarn
$ yarn add assert-util-types
case: use pnpm
$ pnpm install assert-util-types
📝 Usage
Nominal
Nominal types prevent confusion between two structurally identical types. In regular TypeScript you run into this problem:
type User = {
  id: number
  name: string
}
type Admin = {
  id: number
  name: string
}

const mike: Admin = { id: 1, name: 'mike' }

const introduceMe = (props: User): string => {
  const { id, name } = props;
  return `No.${id}, User is ${name} !`
}

// oh... compilation was successful.
// we can use User and Admin in the same way.
introduceMe(mike);
Nominal types solve this problem:
import { Nominal } from 'assert-util-types';

type userId = Nominal<number, 'userId'>
type adminId = Nominal<number, 'adminId'>

type User = {
  id: userId
  name: string
}
type Admin = {
  id: adminId
  name: string
}

const mike = { id: 1, name: 'mike' } as Admin

const introduceMe = (props: User): string => {
  const { id, name } = props;
  return `No.${id}, User is ${name} !`
}

// That's great! We get an error!
introduceMe(mike); // Argument of type 'Admin' is not assignable to parameter of type 'User'.
User-defined type guards
asSomething functions create more complex type checks.
These functions return the passed value on success, or throw detailed error messages on failure.
asFilledString
import { asFilledString } from 'assert-util-types';

asFilledString('hello', 'target'); // 'hello'
asFilledString('', 'empty string'); // error message is "empty string should have least 1 character"
asFilledString(1, 'target'); // error message is "target should have least 1 character"
assertNumber

import { assertNumber } from 'assert-util-types';

assertNumber(1, 'target'); // ok
assertNumber(NaN, 'NaN'); // error message is "NaN should be number"
asNumber
import { asNumber } from 'assert-util-types';

asNumber(1, 'target'); // 1
asNumber(true, 'target'); // 1
asNumber('hello', 'NaN'); // TypeError: Cannot convert hello to number
assertFilledArray

import { assertFilledArray } from 'assert-util-types';

assertFilledArray(['string', 'number'], 'target'); // ok
assertFilledArray([], 'empty array'); // error message is "empty array should have least 1 item"
KubeVirtBMC brings out-of-band management to virtual machines on Kubernetes in the traditional way, i.e., via IPMI and Redfish. This allows users to power on/off/reset and set the boot device for virtual machines. It was initially designed for Tinkerbell/Seeder to provision KubeVirt virtual machines, but as long as your provisioning tools play nicely with IPMI/Redfish, you can use KubeVirtBMC to manage your virtual machines on Kubernetes clusters.
Install KubeVirtBMC with Helm. Optionally, you can specify the image repository and tag, e.g., --set image.repository=starbops/virtbmc-controller --set image.tag=v0.4.1:
# Install the chart from the remote repository
helm repo add kubevirtbmc https://charts.zespre.com/
helm repo update
helm upgrade --install kubevirtbmc kubevirtbmc/kubevirtbmc \
--namespace=kubevirtbmc-system \
--create-namespace
# Or, install the chart locally, with the bleeding-edge image, i.e., `starbops/virtbmc-controller:main-head`
git clone https://github.com/starbops/kubevirtbmc.git
cd kubevirtbmc/
helm upgrade --install kubevirtbmc ./deploy/charts/kubevirtbmc \
--namespace=kubevirtbmc-system \
--create-namespace \
--set=image.tag=main-head
Project Description
KubeVirtBMC was inspired by VirtualBMC. The difference between them can be illustrated as below:
flowchart LR
client1[Client]
client2[Client]
BMC1[BMC]
VM[VM]
subgraph KubeVirtBMC
direction LR
client2-->|IPMI/Redfish|virtBMC-->|K8s API|VM
end
subgraph VirtualBMC
direction LR
client1-->|IPMI|vBMC-->|libvirt API|BMC1
end
Goals
Providing a selective set of BMC functionalities for virtual machines powered by KubeVirt
Providing accessibility through the network to the virtual BMCs of the virtual machines
Non-goals
Providing BMC functionalities for bare-metal machines
Providing BMC accessibility outside of the cluster via LoadBalancer or NodePort type of Services
KubeVirtBMC consists of two components:
virtbmc-controller: A Kubernetes controller manager built with kubebuilder that reconciles on the VirtualMachineBMC and other relevant resources
virtbmc: A BMC emulator for serving IPMI/Redfish requests and translating them to native Kubernetes API requests to get information from the virtual machine or even take actions on it
Below is the workflow of KubeVirtBMC when a VirtualMachine is created and booted up:
Take a peek at the VirtualMachineBMC CRD (Custom Resource Definition):
// Condition type constant.
const (
    ConditionReady = "Ready"
)

// VirtualMachineBMCSpec defines the desired state of VirtualMachineBMC.
type VirtualMachineBMCSpec struct {
    // Reference to the VM to manage.
    VirtualMachineRef *corev1.LocalObjectReference `json:"virtualMachineRef,omitempty"`
    // Reference to the Secret containing IPMI/Redfish credentials.
    AuthSecretRef *corev1.LocalObjectReference `json:"authSecretRef,omitempty"`
}

// VirtualMachineBMCStatus defines the observed state of VirtualMachineBMC.
type VirtualMachineBMCStatus struct {
    // IP address exposed by the BMC service
    ClusterIP string `json:"clusterIP,omitempty"`
    // List of current conditions (e.g., Ready)
    Conditions []metav1.Condition `json:"conditions,omitempty"`
}
Getting Started
Prerequisites
Go version v1.24.0+
Docker version 28.5+.
kubectl version v1.32.0+.
Access to a Kubernetes v1.32.0+ cluster.
KubeVirt v1.6.0+.
Develop
Build and push the images:
export PUSH=true
make docker-build
# For building multi-arch images
make docker-buildx
NOTE: These images are published to the personal registry you specified, and the working environment needs pull access to them. Make sure you have the proper permissions to the registry if the above commands don't work.
Install the CRDs into the cluster:
make install
Run the controller locally
export ENABLE_WEBHOOKS=false
make run
Generate the Redfish API and server stubs
[!NOTE] This section is only necessary if you want to change the Redfish schema version.
Download the Redfish schema from the DMTF website:
make download-redfish-schema
Normally, the OpenAPI spec file hack/<REDFISH_SCHEMA_BUNDLE>/openapi/openapi.yaml is the one you need. Copy and modify it, making sure the changes are reflected in the file hack/redfish/spec/openapi.yaml. Then generate the code with openapi-generator:
make generate-redfish-api
The generated code will be placed in the pkg/generated/redfish directory.
[!NOTE] You might also need to adjust the adapter and handler code because they are coupled with the Redfish schema to some degree.
Deploy the Manager to the cluster with the image specified by IMG:
# Use the latest image at main-head
make deploy
# Or checkout to a specific branch/tag
git checkout <branch/tag>
make deploy
# Or specify the custom-built image
make deploy IMG=<some-registry>/virtbmc-controller:<tag>
[!NOTE] If you encounter RBAC errors, you may need to grant yourself cluster-admin privileges or be logged in as admin.
Interact with Virtual BMCs
Set up the virtual BMC
In order to have a functioning virtual BMC, the virtual machine and the BMC credentials referenced by the to-be-created VirtualMachineBMC resource need to be created first. Here is an example of them:
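A minimal sketch of those two resources might look like the following. The Secret keys and the apiVersion group are assumptions, not taken from the project docs; check the installed CRD (e.g. `kubectl api-resources | grep -i bmc`) for the exact values. The names line up with the label and Pod/Service names shown below.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-vmbmc-credentials
  namespace: default
stringData:           # key names are assumptions
  username: admin
  password: password
---
apiVersion: kubevirtbmc.zespre.com/v1  # assumption; verify against the installed CRD
kind: VirtualMachineBMC
metadata:
  name: test-vmbmc
  namespace: default
spec:
  virtualMachineRef:
    name: test-vm
  authSecretRef:
    name: test-vmbmc-credentials
```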
Behind the scenes, KubeVirtBMC automatically creates the dedicated Pod and Service to provide the virtual BMC functionality for the virtual machine you specified. You can verify it by running the get command with the label:
$ kubectl get pods,services -l kubevirt.io/virtualmachinebmc-name=test-vmbmc
NAME READY STATUS RESTARTS AGE
pod/test-vm-virtbmc 1/1 Running 0 37s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/test-vm-virtbmc ClusterIP 10.53.220.67 <none> 623/UDP,80/TCP 37s
Access virtual BMC via IPMI
To access the virtual BMC via IPMI, you need to be in the cluster network. Run a Pod that comes with ipmitool built in:
kubectl run -it --rm ipmitool --image=mikeynap/ipmitool --command -- /bin/sh
Inside the Pod you can, for example, turn on the virtual machine via ipmitool:
# Get the power status of the virtual machine
$ ipmitool -I lan -U admin -P password -H test-vm-virtbmc.default.svc.cluster.local power status
Chassis Power is off
# Turn on the virtual machine
$ ipmitool -I lan -U admin -P password -H test-vm-virtbmc.default.svc.cluster.local power on
Chassis Power Control: Up/On
# Wait a few seconds and then get the power status again
$ ipmitool -I lan -U admin -P password -H test-vm-virtbmc.default.svc.cluster.local power status
Chassis Power is on
Access virtual BMC via Redfish
To access the virtual BMC through the Redfish API, you can use curl:
kubectl run -it --rm curl-redfish --image=curlimages/curl --command -- /bin/sh
Inside the Pod, you can operate the virtual machine via Redfish APIs:
# Get the Redfish ServiceRoot
$ curl -L http://test-vm-virtbmc.default.svc/redfish/v1
{"@odata.context":"/redfish/v1/$metadata#ServiceRoot.ServiceRoot","@odata.id":"/redfish/v1","@odata.type":"#ServiceRoot.v1_16_1.ServiceRoot","AccountService":{"@odata.id":"/redfish/v1/AccountService"},"AggregationService":{},"Cables":{},"CertificateService":{},"Chassis":{"@odata.id":"/redfish/v1/Chassis"},"ComponentIntegrity":{},"CompositionService":{"@odata.id":"/redfish/v1/CompositionService"},"Description":"ServiceRoot","EventService":{"@odata.id":"/redfish/v1/EventService"},"Fabrics":{},"Facilities":{},"Id":"","JobService":{},"JsonSchemas":{},"KeyService":{},"LicenseService":{},"Links":{"ManagerProvidingService":{"@odata.id":"/redfish/v1/Managers/BMC"},"Sessions":{"@odata.id":"/redfish/v1/SessionService/Sessions"}},"Managers":{"@odata.id":"/redfish/v1/Managers"},"NVMeDomains":{},"Name":"ServiceRoot","PowerEquipment":{},"ProtocolFeaturesSupported":{"DeepOperations":{},"ExpandQuery":{}},"RedfishVersion":"1.16.1","RegisteredClients":{},"Registries":{"@odata.id":"/redfish/v1/Registries"},"ResourceBlocks":{},"ServiceConditions":{},"SessionService":{"@odata.id":"/redfish/v1/SessionService"},"Storage":{},"StorageServices":{},"StorageSystems":{},"Systems":{"@odata.id":"/redfish/v1/Systems"},"Tasks":{"@odata.id":"/redfish/v1/Tasks"},"TelemetryService":{"@odata.id":"/redfish/v1/TelemetryService"},"ThermalEquipment":{},"UUID":"00000000-0000-0000-0000-000000000000","UpdateService":{"@odata.id":"/redfish/v1/UpdateService"}}
# Log in by creating a session
$ curl -i -X POST -H "Content-Type: application/json" http://test-vm-virtbmc.default.svc/redfish/v1/SessionService/Sessions -d '{"UserName":"admin","Password":"password"}'
HTTP/1.1 201 Created
Content-Type: application/json; charset=UTF-8
Location: /redfish/v1/SessionService/Sessions/337bf6b2-e4c7-41c8-bfe4-fe3ee3ce40f2
X-Auth-Token: 55f88d07289cf1207b7b967f1823f5b28e08c8977f6c742f8175274afb214c93
Date: Wed, 18 Dec 2024 15:27:04 GMT
Content-Length: 225
{"@odata.id":"/redfish/v1/SessionService/Sessions/1","@odata.type":"Session.v1_7_1.Session","Actions":{},"Id":"337bf6b2-e4c7-41c8-bfe4-fe3ee3ce40f2","Links":{"OutboundConnection":{}},"Name":"User Session","UserName":"admin"}
# Get the System resource
$ curl -H "X-Auth-Token: 55f88d07289cf1207b7b967f1823f5b28e08c8977f6c742f8175274afb214c93" http://test-vm-virtbmc.default.svc/redfish/v1/Systems/1
{"@odata.context":"/redfish/v1/$metadata#ComputerSystem.ComputerSystem","@odata.id":"/redfish/v1/Systems/1","@odata.type":"#ComputerSystem.v1_22_0.ComputerSystem","Actions":{"#ComputerSystem.AddResourceBlock":{},"#ComputerSystem.Decommission":{},"#ComputerSystem.RemoveResourceBlock":{},"#ComputerSystem.Reset":{"target":"/redfish/v1/Systems/1/Actions/ComputerSystem.Reset","title":"Reset"},"#ComputerSystem.SetDefaultBootOrder":{}},"AssetTag":"","Bios":{},"Boot":{"BootOptions":{},"BootSourceOverrideEnabled":"Disabled","BootSourceOverrideMode":"Legacy","BootSourceOverrideTarget":"Hdd","Certificates":{}},"BootProgress":{},"Certificates":{},"Composition":{},"Description":"Computer System","EthernetInterfaces":{},"FabricAdapters":{},"GraphicalConsole":{},"GraphicsControllers":{},"HostWatchdogTimer":{"FunctionEnabled":false,"Status":{},"TimeoutAction":""},"HostedServices":{"StorageServices":{}},"Id":"1","IdlePowerSaver":{},"IndicatorLED":"Unknown","KeyManagement":{"KMIPCertificates":{}},"LastResetTime":"0001-01-01T00:00:00Z","Links":{"HostingComputerSystem":{}},"LogServices":{},"Manufacturer":"KubeVirt","Memory":{},"MemoryDomains":{},"MemorySummary":{"Metrics":{},"Status":{},"TotalSystemMemoryGiB":0},"Model":"KubeVirt","Name":"default/test-vm","NetworkInterfaces":{"@odata.id":"/redfish/v1/Systems/1/NetworkInterfaces"},"OperatingSystem":"/redfish/v1/Systems/1/OperatingSystem","PartNumber":"","PowerState":"Off","ProcessorSummary":{"Count":0,"Metrics":{},"Status":{}},"Processors":{},"SKU":"","SecureBoot":{},"SerialConsole":{"IPMI":{},"SSH":{},"Telnet":{}},"SerialNumber":"000000000000","SimpleStorage":{"@odata.id":"/redfish/v1/Systems/1/SimpleStorage"},"Status":{},"Storage":{"@odata.id":"/redfish/v1/Systems/1/Storage"},"SystemType":"Virtual","USBControllers":{},"UUID":"00000000-0000-0000-0000-000000000000","VirtualMedia":{"@odata.id":"/redfish/v1/Systems/1/VirtualMedia"},"VirtualMediaConfig":{}}
# Set the boot device to PXE
$ curl -i -X PATCH -H "Content-Type: application/json" -H "X-Auth-Token: 55f88d07289cf1207b7b967f1823f5b28e08c8977f6c742f8175274afb214c93" http://test-vm-virtbmc.default.svc/redfish/v1/Systems/1 -d '{"Boot":{"BootSourceOverrideTarget":"Pxe","BootSourceOverrideEnabled":"Continuous"}}'
HTTP/1.1 204 No Content
Content-Type: application/json; charset=UTF-8
Date: Wed, 18 Dec 2024 15:54:09 GMT
# Start the virtual machine
$ curl -i -X POST -H "Content-Type: application/json" -H "X-Auth-Token: 55f88d07289cf1207b7b967f1823f5b28e08c8977f6c742f8175274afb214c93" http://test-vm-virtbmc.default.svc/redfish/v1/Systems/1/Actions/ComputerSystem.Reset -d '{"ResetType":"On"}'
HTTP/1.1 204 No Content
Content-Type: application/json; charset=UTF-8
Date: Wed, 18 Dec 2024 15:59:25 GMT
# Reboot the virtual machine
$ curl -i -X POST -H "Content-Type: application/json" -H "X-Auth-Token: 55f88d07289cf1207b7b967f1823f5b28e08c8977f6c742f8175274afb214c93" http://test-vm-virtbmc.default.svc/redfish/v1/Systems/1/Actions/ComputerSystem.Reset -d '{"ResetType":"ForceRestart"}'
HTTP/1.1 204 No Content
Content-Type: application/json; charset=UTF-8
Date: Wed, 18 Dec 2024 16:02:49 GMT
# Stop the virtual machine
$ curl -i -X POST -H "Content-Type: application/json" -H "X-Auth-Token: 55f88d07289cf1207b7b967f1823f5b28e08c8977f6c742f8175274afb214c93" http://test-vm-virtbmc.default.svc/redfish/v1/Systems/1/Actions/ComputerSystem.Reset -d '{"ResetType":"GracefulShutdown"}'
HTTP/1.1 204 No Content
Content-Type: application/json; charset=UTF-8
Date: Wed, 18 Dec 2024 16:05:30 GMT
# Log out by deleting the session
$ curl -i -X DELETE -H "X-Auth-Token: 55f88d07289cf1207b7b967f1823f5b28e08c8977f6c742f8175274afb214c93" http://test-vm-virtbmc.default.svc/redfish/v1/SessionService/Sessions/337bf6b2-e4c7-41c8-bfe4-fe3ee3ce40f2
HTTP/1.1 204 No Content
Content-Type: application/json; charset=UTF-8
Date: Wed, 18 Dec 2024 16:06:12 GMT
You can even attach/detach an ISO image to the virtual machine with the Redfish virtual media function:
# Insert virtual media to the virtual machine
$ curl -i -X POST -H "Content-Type: application/json" -H "X-Auth-Token: 55f88d07289cf1207b7b967f1823f5b28e08c8977f6c742f8175274afb214c93" http://test-vm-virtbmc.default.svc/redfish/v1/Managers/BMC/VirtualMedia/CD1/Actions/VirtualMedia.InsertMedia -d '{"Image": "https://releases.ubuntu.com/noble/ubuntu-24.04.3-live-server-amd64.iso", "Inserted": true}'

# Get virtual media status
$ curl -i -H "X-Auth-Token: 55f88d07289cf1207b7b967f1823f5b28e08c8977f6c742f8175274afb214c93" http://test-vm-virtbmc.default.svc/redfish/v1/Managers/BMC/VirtualMedia/CD1
{"@odata.context":"/redfish/v1/$metadata#VirtualMedia.VirtualMedia","@odata.id":"/redfish/v1/Managers/BMC/VirtualMedia/CD1","@odata.type":"#VirtualMedia.v1_6_3.VirtualMedia","Actions":{"#VirtualMedia.EjectMedia":{},"#VirtualMedia.InsertMedia":{}},"Certificates":{},"ClientCertificates":{},"ConnectedVia":"URI","Description":"Virtual Media","Id":"CD1","Image":"https://releases.ubuntu.com/noble/ubuntu-24.04.3-live-server-amd64.iso","ImageName":"","Inserted":true,"MediaTypes":["CD","DVD"],"Name":"Virtual Media","Status":{},"WriteProtected":false}
# Eject virtual media from the virtual machine
$ curl -i -X POST -H "Content-Type: application/json" -H "X-Auth-Token: 55f88d07289cf1207b7b967f1823f5b28e08c8977f6c742f8175274afb214c93" http://test-vm-virtbmc.default.svc/redfish/v1/Managers/BMC/VirtualMedia/CD1/Actions/VirtualMedia.EjectMedia -d '{}'
Under the hood, KubeVirtBMC's Redfish virtual media function is backed by KubeVirt's DeclarativeHotplugVolumes feature and CDI DataVolume. As a result, you need to enable the feature gate and have CDI installed in the cluster as prerequisites. For each virtual machine on which you want to use the virtual media function, its VirtualMachine resource must have a CD-ROM disk defined as a stub for volume hotplug. For instance:
...
devices:
  disks:
  - cdrom: # The cdrom stub must exist before using the virtual media function
      bus: sata
    name: cdrom # The name of the CD-ROM disk can be anything
...
Expose the Redfish API externally
Due to the nature of the Redfish API, you can expose the Redfish service to the outside of the cluster with the aid of Ingress controllers. What’s more, you can use cert-manager to issue a certificate for the Redfish service.
Here, we will use the self-signed issuer type as an example (please note the security implications; for more details, see https://cert-manager.io/docs/configuration/selfsigned/). To do so, create an Issuer resource in the same namespace as the VirtualMachineBMC resource:
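For example, a self-signed Issuer (standard cert-manager v1 API) could look like this; the metadata names are placeholders:

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer   # placeholder name
  namespace: default        # same namespace as the VirtualMachineBMC resource
spec:
  selfSigned: {}
```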
Next, create an Ingress resource (assuming you have an Ingress controller, e.g., nginx-ingress, installed) for each VirtualMachineBMC resource you want to expose:
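A sketch of such an Ingress, assuming the nginx Ingress class, a placeholder hostname, and the test-vm-virtbmc Service (port 80) created by KubeVirtBMC as shown above:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-vmbmc-redfish
  namespace: default
  annotations:
    cert-manager.io/issuer: selfsigned-issuer  # ask cert-manager for a TLS cert
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - test-vmbmc.example.com   # placeholder hostname
    secretName: test-vmbmc-redfish-tls
  rules:
  - host: test-vmbmc.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-vm-virtbmc  # the Service created by KubeVirtBMC
            port:
              number: 80
```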
Licensed under the Apache License, Version 2.0 (the “License”);
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an “AS IS” BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Localhost SSL: generate a self-signed SSL certificate following the guide in the mkcert repository.
docker compose up -d
Then restart the webserver container to reload the SSL configuration:
docker container restart webserver
The containers are now built and running. You should be able to access the Drupal installation in your browser at the configured address, e.g. https://example.com.
For convenience you may add a new entry into your hosts file.
Portainer
docker compose -f portainer-docker-compose.yml -p portainer up -d
Portainer is a container management tool for Docker and Docker Swarm with a highly intuitive GUI and API.
You can also visit https://example.com:9001 to access portainer after starting the containers.
Usage
With Portainer you can manage Docker containers without the command line.
Here's a quick reference of commonly used Docker commands:
docker ps -a # Lists all containers (running and stopped)
You should see the “Drupal installation” page in your browser. If not, please check if your PHP installation satisfies Drupal’s requirements.
https://example.com
If you see "The website encountered an unexpected error. Please try again later." in your browser, run drush cache:rebuild in the Drupal container.
Add or remove directives in the ./php-fpm/php/conf.d/security.ini file for custom php.ini configurations.
Make custom host configuration changes in ./php-fpm/php-fpm.d/z-www.conf and then restart the service. FPM uses php.ini syntax for its configuration files: php-fpm.conf and the pool configuration files.
Add and/or remove Drupal site folders and files in the ./drupal folder with any FTP client.
You can also visit https://example.com to access the website after starting the containers.
Webserver
Add or remove directives in the ./webserver/templates/nginx.conf.template file for custom nginx configurations.
Create ./drupal/sites/default/files/services.yml inside the default folder and add the code below to it.
services:
# Cache tag checksum backend. Used by redis and most other cache backend
# to deal with cache tag invalidations.
cache_tags.invalidator.checksum:
class: Drupal\redis\Cache\RedisCacheTagsChecksum
arguments: ['@redis.factory']
tags:
- { name: cache_tags_invalidator }
# Replaces the default lock backend with a redis implementation.
lock:
class: Drupal\Core\Lock\LockBackendInterface
factory: ['@redis.lock.factory', get]
# Replaces the default persistent lock backend with a redis implementation.
lock.persistent:
class: Drupal\Core\Lock\LockBackendInterface
factory: ['@redis.lock.factory', get]
arguments: [true]
# Replaces the default flood backend with a redis implementation.
flood:
class: Drupal\Core\Flood\FloodInterface
factory: ['@redis.flood.factory', get]
All necessary changes to sites/default and sites/default/settings.php have been made, so you should remove write permissions to them now in order to avoid security risks.
You can add your own custom config.inc.php settings (such as Configuration Storage setup) by creating a file named config.user.inc.php with the various user defined settings in it, and then linking it into the container using:
./phpmyadmin/config.user.inc.php
You can also visit https://example.com:9090 to access phpMyAdmin after starting the containers.
On the first authorization screen (htpasswd) and on the phpMyAdmin login screen, the username and password are the same as those supplied in the .env file.
Backup
This will back up all files and folders in the database/dump SQL and html volumes, once per day, and write them to ./backups with a filename like backup-2023-01-01T10-18-00.tar.gz.
The backup can run on a custom cron schedule, e.g. BACKUP_CRON_EXPRESSION: '20 01 * * *' (UTC timezone).
A utility that pings a range of NordVPN servers and returns the servers with the fastest response times.
Help Output:
python nordPing.py [-h] [-c PING_COUNT] [-n TOP_N] [-C COUNTRY_CODE] [-L LOWER_RANGE] [-U UPPER_RANGE] [-p PROCESSES] [--version]
This script will ping the NordVPN servers and return the ones with the fastest response times
optional arguments:
-h, --help show this help message and exit
-c PING_COUNT, --ping_count PING_COUNT
Number of pings to send to each server (Default: 1)
-n TOP_N, --top_n TOP_N
Number of fastest responses to return (Default: 3)
-C COUNTRY_CODE, --country_code COUNTRY_CODE
Country code for the servers to ping (Default: us)
-L LOWER_RANGE, --lower_range LOWER_RANGE
Lower range of the servers to ping (Default: 5500)
-U UPPER_RANGE, --upper_range UPPER_RANGE
Upper range of the servers to ping (Default: 5502)
-p PROCESSES, --processes PROCESSES
Number of processes to use (Default: 5)
--version show program's version number and exit
Settings:
-------------------------------
Ping count: 3
Country code: us
Lower range: 9372
Upper range: 9390
Parallel Processes: 8
The 5 fastest responses are:
-------------------------------
- us9373.nordvpn.com: 17.9 ms
- us9382.nordvpn.com: 18.0 ms
- us9378.nordvpn.com: 18.3 ms
- us9385.nordvpn.com: 19.3 ms
- us9379.nordvpn.com: 19.5 ms
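The core idea of the script can be sketched as follows. This is an illustration, not the script's actual implementation: it uses threads instead of processes, and the round-trip-time parsing assumes the Linux ping summary line.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def server_names(country_code: str, lower: int, upper: int) -> list:
    # e.g. us9372.nordvpn.com ... us9390.nordvpn.com
    return [f"{country_code}{n}.nordvpn.com" for n in range(lower, upper + 1)]

def ping_ms(host: str, count: int = 1) -> float:
    # Average round-trip time in ms, or inf when unreachable/unparsable.
    try:
        out = subprocess.run(
            ["ping", "-c", str(count), host],
            capture_output=True, text=True, timeout=10,
        ).stdout
        # Parse the "rtt min/avg/max/mdev = a/b/c/d ms" summary (Linux ping)
        return float(out.rsplit(" = ", 1)[1].split("/")[1])
    except Exception:
        return float("inf")

def fastest(hosts, top_n=3, workers=5, ping=ping_ms):
    # Ping all hosts in parallel and keep the top_n lowest latencies
    with ThreadPoolExecutor(max_workers=workers) as ex:
        times = list(ex.map(ping, hosts))
    return sorted(zip(hosts, times), key=lambda pair: pair[1])[:top_n]
```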
Contributions:
Contributions are welcome. Fork the repo, make your changes, create a diff file, and email the diff file and your GitHub username to luis@moraguez.com. If the changes are approved, you will be added as a contributor to the repo.
Donations:
If this utility helped you with a project you’re working on and you wish to make a donation, you can do so by clicking the donate button that follows. Thank you for your generosity and support!
This package is a simple demo of how to implement JWT authentication in Neos Flow.
For more details about the JSON Web token itself check https://jwt.io/introduction/.
This mechanism is a great choice for signing API requests in Flow.
This package contains
JwtToken
This class represents a JWT token. It contains the JWT string which is sent in your request. The JWT string must be provided in an X-JWT header.
The payload itself must contain a property accountIdentifier.
JwtTokenProvider
The JwtTokenProvider validates a JwtToken. It will first check if the token contains a jwt string at all and then try to decode it with a configured shared secret. If the payload can be decoded it will create a transient account with the data from the payload and set this account as authenticated.
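To make the decode step concrete, here is an illustration (in Python, outside Flow) of verifying an HS256 JWT against a shared secret and requiring the accountIdentifier claim; the function names are generic sketches, not this package's PHP API.

```python
import base64
import hashlib
import hmac
import json

def b64url_encode(b: bytes) -> str:
    return base64.urlsafe_b64encode(b).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def sign_hs256(payload: dict, secret: bytes) -> str:
    # Build header.payload and append an HMAC-SHA256 signature
    header_b64 = b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload_b64 = b64url_encode(json.dumps(payload).encode())
    sig = hmac.new(secret, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256).digest()
    return f"{header_b64}.{payload_b64}.{b64url_encode(sig)}"

def verify_hs256(token: str, secret: bytes) -> dict:
    header_b64, payload_b64, sig_b64 = token.split(".")
    expected = hmac.new(secret, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("invalid signature")
    payload = json.loads(b64url_decode(payload_b64))
    if "accountIdentifier" not in payload:  # required by this package's payload
        raise ValueError("missing accountIdentifier")
    return payload
```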
Access data from the payload in flow
This demo implementation sets the full payload on the authenticated token, so you can access the payload data from that token in your Flow application.
Eldarica is a model checker for Horn clauses, Numerical Transition
Systems, and software programs. Inputs can be read in a variety of
formats, including SMT-LIB 2 and Prolog for Horn clauses, and fragments of
Scala and C for software programs, and are analysed using a variant of the
Counterexample-Guided Abstraction
Refinement (CEGAR) method. Eldarica is fast and includes sophisticated
interpolation-based techniques for synthesising new predicates for
CEGAR, enabling it to solve a wide range of verification problems.
The Eldarica C parser accepts programs augmented with various primitives
from the timed automata world: supporting concurrency, clocks, communication
channels, as well as analysis of systems with an unbounded number of
processes (parameterised analysis).
You can either download a binary release of Eldarica, or compile the Scala
code yourself. Since Eldarica uses sbt, compilation is quite
simple: you just need sbt installed on your machine,
and then type sbt assembly to download the compiler, all
required libraries, and produce a binary of Eldarica.
After compilation (or downloading a binary release), calling Eldarica
is normally as easy as saying
A set of examples is provided at https://eldarica.org/eldarica, and included in the distribution's regression-tests directory.
You can use the script eld-client instead of eld in order to run Eldarica in a server-client mode,
which significantly speeds up processing of multiple problems.
A full list of options can be obtained by calling ./eld -h.
The options -disj, -abstract, -stac can be used to control
predicate generation. For the option -stac to work, it is currently necessary to have Yices (version 1) installed, as this is a dependency of the Flata library.
The option -sym can be used to switch to the symbolic execution engine of Eldarica, which will then be applied instead of CEGAR.
This is a blog application created with an MVC architecture.
Technologies
Laravel 8
Laravel Livewire
Laravel Jetstream
Laravel Permission
Laravel Collective
MySQL Database
Blade Templates Frontend
Tailwind CSS
AdminLTE
About Laravel
Laravel is a web application framework with expressive, elegant syntax. We believe development must be an enjoyable and creative experience to be truly fulfilling. Laravel takes the pain out of development by easing common tasks used in many web projects.
Laravel is accessible, powerful, and provides tools required for large, robust applications.
Learning Laravel
Laravel has the most extensive and thorough documentation and video tutorial library of all modern web application frameworks, making it a breeze to get started with the framework.
If you don’t feel like reading, Laracasts can help. Laracasts contains over 1500 video tutorials on a range of topics including Laravel, modern PHP, unit testing, and JavaScript. Boost your skills by digging into our comprehensive video library.
Laravel Sponsors
We would like to extend our thanks to the following sponsors for funding Laravel development. If you are interested in becoming a sponsor, please visit the Laravel Patreon page.
Thank you for considering contributing to the Laravel framework! The contribution guide can be found in the Laravel documentation.
Code of Conduct
In order to ensure that the Laravel community is welcoming to all, please review and abide by the Code of Conduct.
Security Vulnerabilities
If you discover a security vulnerability within Laravel, please send an e-mail to Taylor Otwell via taylor@laravel.com. All security vulnerabilities will be promptly addressed.
License
The Laravel framework is open-sourced software licensed under the MIT license.