Blog

  • vault-plugin-auth-cloudfoundry

    vault-plugin-auth-cloudfoundry

    This is a standalone backend authentication plugin for use with HashiCorp Vault.
    This plugin allows applications running in Cloud Foundry to authenticate with Vault using Instance Identity credentials.

    Background

    CF

    Cloud Foundry can be enabled to provide instance credentials to its apps (Enabling Instance Identity).
    Once enabled, Cloud Foundry injects into each app’s file system a unique x.509 certificate and RSA private key (referenced by $CF_INSTANCE_CERT and $CF_INSTANCE_KEY respectively).
    This key pair is updated regularly and automatically by Cloud Foundry, with a relatively short certificate TTL (e.g., 24 hours).

    The public certificate contains fields identifying the app’s:

    • CF Org GUID
    • CF Space GUID
    • CF Instance GUID

    More information: Using Instance Identity Credentials.

    Vault

    Vault provides a mechanism to create custom authentication plugins. These plugins typically authenticate users through use of cryptography and an established form of trust.
    The plugin runs as a separate process on the same host as the main Vault process. Communication is via gRPC.

    More information: Vault: Building Plugin Backends.

    A common difficulty when using Vault is how to bootstrap applications with the required credentials needed to authenticate. This can introduce complexity and security concerns
    into CI/CD pipelines and/or other deployment mechanisms, since they then become responsible for distribution of credentials.

    Design

    A Vault authentication plugin that would allow Cloud Foundry apps to authenticate using the Cloud Foundry Instance Identity system. Apps would not need to be bootstrapped with
    credentials. Instead, they would use the certificates provided by Cloud Foundry to authenticate themselves and gain access to Vault secrets.

    Authentication Flow

    1. A Vault admin enables the Cloud Foundry Vault authentication plugin, and configures it to trust the Cloud Foundry certificate authority. This is a one-time configuration step.

    2. The Cloud Foundry app generates a JWT token. The token embeds the app’s public certificate in its x5c header field
      and is signed using the app’s CF-provided private key.

    3. The app makes an authentication request to Vault. Included in the request is the JWT token created in the previous step.

    4. Vault recognizes the request as a Cloud Foundry authentication request and delegates to the plugin. The plugin processes the request and validates the following:

      • App’s certificate was signed by the configured CA
      • App’s certificate has not expired
      • App’s certificate has proper key usages set
      • App’s certificate contains the expected Cloud Foundry specific attributes (Org, Space, and Instance GUIDs)
      • The JWT token was signed using the app’s public/private key pair
    5. Once the plugin validates the provided JWT, it looks up the Vault policies to assign to the app. This can be done in one of two ways.
      In either case, the plugin is able to perform the lookup using its knowledge about the request:

      • The app includes a role in its request. A Vault admin can then create a mapping of roles to policies. The roles can be scoped to Cloud Foundry Orgs, Spaces, or Instances
      • A Vault admin creates a mapping of Cloud Foundry Orgs, Spaces, and Instances to Vault policies
    6. The plugin informs Vault that it approves or denies the token request. When approved, the plugin provides the Vault policies from the previous step and a sensible TTL, likely based on the certificate’s TTL.

    7. Vault then creates an appropriately scoped token and returns it to the app. The app now uses the token to read and write secrets.

    License

    Licensed under the Apache License, Version 2.0: http://www.apache.org/licenses/LICENSE-2.0


  • opencage-api-client

    OpenCage API Client

    Version Downloads GitHub license Maintained

    This repository is an OpenCage Geocoding API client for Node.js, TypeScript, and JavaScript.

    Continuous integration

    Node.js CI codecov

    Security

    Source Scores
    FOSSA FOSSA Status
    Snyk Known Vulnerabilities

    🎓 Tutorial

    You can find a comprehensive tutorial for using this module on the OpenCage site.

    🔧 Getting started

    Sign up for a free-trial API Key.

    NodeJS

    First install the library with npm, yarn, or pnpm:

    npm i --save opencage-api-client

    or

    yarn add opencage-api-client

    or

    pnpm add opencage-api-client

    Starting in v2, dotenv is no longer bundled as a dependency. While we still recommend using .env files for configuration, you’ll need to set up dotenv yourself in your project.

    Create a .env file with:

    OPENCAGE_API_KEY=YOUR-OPENCAGE_DATA_API_KEY

    Here are examples:

    1. CommonJS
    require('dotenv').config(); // or add `key` as an input parameter of the function geocode
    
    const opencage = require('opencage-api-client');
    
    opencage
      .geocode({ q: 'lyon' })
      .then((data) => {
        console.log(JSON.stringify(data));
      })
      .catch((error) => {
        console.log('error', error.message);
      });
    2. ESM
    import 'dotenv/config'; // or add `key` as an input parameter of the function geocode
    
    import opencage from 'opencage-api-client';
    
    opencage
      .geocode({ q: 'lyon' })
      .then((data) => {
        console.log(JSON.stringify(data));
      })
      .catch((error) => {
        console.log('error', error.message);
      });
    3. TypeScript

    This example does not use dotenv; it specifies the API key as an input parameter:

    import { geocode } from 'opencage-api-client';
    import type { GeocodingRequest } from 'opencage-api-client';
    
    // The function must not be named `geocode`, or it would shadow the import
    // and call itself recursively.
    async function run() {
      const input: GeocodingRequest = {
        q: '51.952659,7.632473',
        // The API Key value from process.env.OPENCAGE_API_KEY is overridden by the one provided below
        key: '6d0e711d72d74daeb2b0bfd2a5cdfdba', // https://opencagedata.com/api#testingkeys
        no_annotations: 1,
      };
      const result = await geocode(input);
      console.log(JSON.stringify(result, null, 2));
    }

    Browser

    The browser version is built using the UMD format. Modern browsers can also use the ESM version; the example below uses the legacy UMD build.

    The library is available via the unpkg CDN: https://unpkg.com/opencage-api-client

    1- include the library:

    <!-- latest version -->
    <!-- specific version -->
    <script src="https://unpkg.com/opencage-api-client@1.1.0/dist/opencage-api.min.js"></script>

    2- use it

    opencage
      .geocode({ q: 'lyon', key: 'YOUR-API-KEY' })
      .then((data) => {
        console.log(JSON.stringify(data));
      })
      .catch((error) => {
        console.log('Error caught:', error.message);
      });

    3- other examples

    You can find some examples in the examples folder.

    ✨ API

    geocode(input, options?)

    input: GeocodingRequest

    Parameter Type Optional? Description
    q String mandatory the query string to be geocoded: a place name, address or coordinates as lat,long
    key String optional the key can be omitted when using an options.proxyURL or when using a node runtime with a dedicated environment variable OPENCAGE_API_KEY
    optional Check the type definition and the API documentation for the other input parameters

    options?: additional options

    Parameter Type Optional? Description
    signal AbortSignal optional An AbortSignal that allows cancelling the request
    proxyURL String optional The proxy URL parameter (useful to hide your API key)
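The signal option plugs into the standard AbortController API. The sketch below demonstrates the cancellation contract with a stand-in slow task instead of a live API call, so it runs without a key; the `slowTask` helper and the timeout values are illustrative, not part of the library.

```javascript
// Demonstrates the `signal` cancellation contract with a stand-in for geocode:
// a slow async task that rejects as soon as the signal aborts.
function slowTask(ms, { signal } = {}) {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => resolve('done'), ms);
    signal?.addEventListener('abort', () => {
      clearTimeout(timer);
      reject(new Error('aborted'));
    });
  });
}

async function main() {
  const controller = new AbortController();
  setTimeout(() => controller.abort(), 50); // cancel after 50 ms

  try {
    // With the real library this line would be:
    //   await geocode({ q: 'lyon' }, { signal: controller.signal });
    await slowTask(1000, { signal: controller.signal });
  } catch (error) {
    console.log('error', error.message); // prints: error aborted
  }
}
main();
```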

    Error handling

    The API can return errors like invalid key, too many requests, daily quota exceeded, etc. These errors are thrown as JavaScript Errors by the geocode function. The error object contains the same status object as the OpenCage API.

    Assuming the catch statement uses error as variable name:

    console.log('Error caught:', error.message);

    will output for a 429:

    Error caught: Too Many Requests

    and

    console.log(JSON.stringify(error, null, 2));

    will output for a 429:

    {
      "status": {
        "code": 429,
        "message": "Too Many Requests"
      }
    }

    Check the error handling examples, which use the OpenCage test API keys.
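Because the thrown error carries the API's status code, transient failures such as a 429 can be retried generically. A minimal sketch; the `retryOn429` helper, attempt count, and backoff delays are illustrative and not part of opencage-api-client.

```javascript
// Retry a request-returning function when it fails with HTTP 429.
// `retryOn429` and its defaults are illustrative helpers, not library API.
async function retryOn429(fn, attempts = 3, delayMs = 1000) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      const code = error.status?.code;
      // Rethrow immediately for non-429 errors or on the last attempt
      if (code !== 429 || i === attempts - 1) throw error;
      await new Promise((r) => setTimeout(r, delayMs * (i + 1))); // linear backoff
    }
  }
}

// Usage with the library would look like:
//   const data = await retryOn429(() => opencage.geocode({ q: 'lyon' }));
```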

    🔨 Build and test

    1. Fork or clone this repository
    2. cd into the repository folder
    3. pnpm install to install all the required dependencies from npm
    4. echo "OPENCAGE_API_KEY=YOUR-OPENCAGE_DATA_API_KEY" >.env to allow integration tests with your API key
    5. lint and test coverage using pnpm run test:coverage
    6. Build : pnpm run build
    7. Test with the examples running ./scripts/run-examples.sh

    🛣 Revision History

    Check the CHANGELOG file.

    🥂 Contributing

    Anyone and everyone is welcome to contribute.

    🐞 Issues

    Find a bug or want to request a new feature? Please let me know by submitting an issue.

    🗝 Licensing

    Licensed under the MIT License

    A copy of the license is available in the repository’s LICENSE file.


  • assert-util-type

    assert-util-types

    npm version License: MIT NPM downloads assert util types release
    TypeScript verifies that your program uses the right types as you write code, avoiding potential issues at runtime. But by using any, you expose yourself to issues that are difficult to trace and debug, especially once the code is deployed to production.

    Sometimes we cannot determine a type ahead of time, for example the result of a library call or fetched data, and must fall back to the any type.

    So, if you don’t want to use the any type, you can use assert-util-types.

    ⚙️ Installation

    case: use npm

    $ npm install assert-util-types

    case: use yarn

    $ yarn add assert-util-types

    case: use pnpm

    $ pnpm install assert-util-types

    📝 Usage

    Nominal

    Nominal types prevent confusion between two structurally identical types. In regular TypeScript you run into this problem:

    type User = {
        id: number
        name: string
    }
    
    type Admin = {
        id: number
        name: string
    }
    
    const mike: Admin = {
        id: 1,
        name: 'mike'
    }
    const introduceMe = (props: User):string => {
        const {id, name} = props;
        return `No.${id}, User is ${name} !`
    }
    // oh... compilation was successful.
    introduceMe(mike); // we can use User and Admin in the same way.

    But the Nominal type solves this problem:

    import { Nominal } from 'assert-util-types';
    
    type userId = Nominal<number, 'userId'>
    type adminId = Nominal<number, 'adminId'>
    
    type User = {
        id: userId
        name: string
    }
    
    type Admin = {
        id: adminId
        name: string
    }
    
    const mike = {
        id: 1,
        name: 'mike'
    } as Admin
    
    
    const introduceMe = (props: User):string => {
        const {id, name} = props;
        return `No.${id}, User is ${name} !`
    }
    
    // That's great! get an error! 
    introduceMe(mike); // Argument of type 'Admin' is not assignable to parameter of type 'User'.

    User-defined type guards

    Each check comes in three flavors: an is* function that returns a boolean, an assert* function that throws on failure, and an as* function that returns the value on success or throws a detailed error message on failure.
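As an illustration of this pattern, here is a hypothetical re-implementation of an as*-style helper (not the library's actual source):

```javascript
// Minimal sketch of an as*-style helper: return the value when the runtime
// check passes, otherwise throw a descriptive error naming the target.
function asString(value, name) {
  if (typeof value !== 'string') {
    throw new TypeError(`${name} should be string`);
  }
  return value;
}

console.log(asString('hello', 'target')); // hello
```

The library's real helpers follow the same shape for each supported type, which is why the success value flows through unchanged while failures surface the target name in the message.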

    isString

    import { isString } from 'assert-util-types';
    
    isString('hello');  // true
    isString(1);    // false

    assertString

    import { assertString } from 'assert-util-types';
    
    assertString('hello', 'target'); // ok
    assertString(1, 'target'); // throw error

    asString

    import { asString } from 'assert-util-types';
    
    asString('hello', 'target'); // 'hello'
    asString(1, 'target'); // error message is "target should be string"

    isFilledString

    import { isFilledString } from 'assert-util-types';
    
    isFilledString('hello'); // true 
    isFilledString(''); // false 
    isFilledString(1); // false

    assertFilledString

    import { assertFilledString } from 'assert-util-types';
    
    assertFilledString('hello', 'target'); // ok 
    assertFilledString('', 'empty string'); // error message is "empty string should have least 1 character" 
    assertFilledString(1, 'target'); // error message is "target should have least 1 character"

    asFilledString

    import { asFilledString } from 'assert-util-types';
    
    asFilledString('hello', 'target'); // 'hello'
    
    asFilledString('', 'empty string'); // error message is "empty string should have least 1 character" 
    asFilledString(1, 'target'); // error message is "target should have least 1 character"

    isNumber

    import { isNumber } from 'assert-util-types';
    
    isNumber(1); // true
    isNumber(NaN) // false
    isNumber('1'); // false

    assertNumber

    import { assertNumber } from 'assert-util-types';
    
    assertNumber(1, 'target'); // ok
    assertNumber(NaN, 'NaN'); // error message is "NaN should be number"

    asNumber

    import { asNumber } from 'assert-util-types';
    
    asNumber(1, 'target'); // 1
    asNumber(true, 'target'); // 1 
    asNumber('hello', 'NaN'); // TypeError: Cannot convert hello to number

    isFilledArray

    import { isFilledArray } from 'assert-util-types';
    
    isFilledArray(['string', 'number']); // true
    isFilledArray([]); // false
    isFilledArray(1); // false

    assertFilledArray

    import { assertFilledArray } from 'assert-util-types';
    
    assertFilledArray(['string', 'number'], 'target'); // ok
    assertFilledArray([], 'empty array'); // error message is "empty array should have least 1 item"

    isObject

    import { isObject } from 'assert-util-types';
    
    isObject({}); // true
    isObject([]); // false
    isObject(1); // false

    assertObject

    import { assertObject } from 'assert-util-types';
    
    assertObject({}); // ok
    assertObject([], 'array'); // error message is "array should be object"

    assertMatchedType

    import { assertMatchedType } from 'assert-util-types';
    
    type User = {
      id?: any;
      name?: string;
      email: string;
    };
    
    const obj: unknown = { id: 1, name: "foo" };
    
    assertMatchedType<User>(obj, ["email"]); // throws error: the required key "email" is missing

    Licence

    MIT

  • kubevirtbmc

    KubeVirtBMC

    main build and publish workflow release

    KubeVirtBMC unleashes the out-of-band management for virtual machines on Kubernetes in a traditional way, i.e., IPMI and Redfish. This allows users to power on/off/reset and set the boot device for virtual machines. It was initially designed for Tinkerbell/Seeder to provision KubeVirt virtual machines, but as long as your provisioning tools play nicely with IPMI/Redfish, you can use KubeVirtBMC to manage your virtual machines on Kubernetes clusters.

    The project was born in SUSE Hack Week 23 and augmented with Redfish in SUSE Hack Week 24. The Redfish virtual media service has been supported after Hack Week 25.

    Quick Start

    Install cert-manager first as it is required for the webhook service and the Redfish API:

    helm upgrade --install cert-manager cert-manager \
        --repo=https://charts.jetstack.io \
        --namespace=cert-manager \
        --create-namespace \
        --version=v1.19.2 \
        --set=crds.enabled=true

    Install KubeVirtBMC with Helm. Optionally, you can specify the image repository and tag, e.g., --set image.repository=starbops/virtbmc-controller --set image.tag=v0.4.1:

    # Install the chart from the remote repository
    helm repo add kubevirtbmc https://charts.zespre.com/
    helm repo update
    helm upgrade --install kubevirtbmc kubevirtbmc/kubevirtbmc \
        --namespace=kubevirtbmc-system \
        --create-namespace
    
    # Or, install the chart locally, with the bleeding-edge image, i.e., `starbops/virtbmc-controller:main-head`
    git clone https://github.com/starbops/kubevirtbmc.git
    cd kubevirtbmc/
    helm upgrade --install kubevirtbmc ./deploy/charts/kubevirtbmc \
        --namespace=kubevirtbmc-system \
        --create-namespace \
        --set=image.tag=main-head

    Project Description

    KubeVirtBMC was inspired by VirtualBMC. The difference between them could be illustrated as below:

    flowchart LR
        client1[Client]
        client2[Client]
        BMC1[BMC]
        VM[VM]
        subgraph KubeVirtBMC
        direction LR
        client2-->|IPMI/Redfish|virtBMC-->|K8s API|VM
        end
        subgraph VirtualBMC
        direction LR
        client1-->|IPMI|vBMC-->|libvirt API|BMC1
        end
    

    Goals

    • Providing a selective set of BMC functionalities for virtual machines powered by KubeVirt
    • Providing accessibility through the network to the virtual BMCs of the virtual machines

    Non-goals

    • Providing BMC functionalities for bare-metal machines
    • Providing BMC accessibility outside of the cluster via LoadBalancer or NodePort type of Services

    KubeVirtBMC consists of two components:

    • virtbmc-controller: A Kubernetes controller manager built with kubebuilder that reconciles on the VirtualMachineBMC and other relevant resources
    • virtbmc: A BMC emulator for serving IPMI/Redfish requests and translating them to native Kubernetes API requests to get information from the virtual machine or even take actions on it

    Below is the workflow of KubeVirtBMC when a VirtualMachine is created and booted up:

    flowchart LR
        controller["virtbmc-controller"]
        cr["virtualmachinebmc CR"]
        virtbmc-pod["virtbmc Pod"]
        virtbmc-svc["virtbmc Service"]
        controller-.->|watches|cr
        cr-.->|owns|virtbmc-svc
        cr-.->|owns|virtbmc-pod
        client--->|IPMI/Redfish|virtbmc-svc
        virtbmc-svc-->virtbmc-pod
        virtbmc-pod-->|HTTP|apiserver
        apiserver-->|modifies|vm
        vm-->|creates|vmi
    

    Take a peek at the VirtualMachineBMC CRD (Custom Resource Definition):

    // Condition type constant.
    const (
    	ConditionReady = "Ready"
    )
    
    // VirtualMachineBMCSpec defines the desired state of VirtualMachineBMC.
    type VirtualMachineBMCSpec struct {
    	// Reference to the VM to manage.
    	VirtualMachineRef *corev1.LocalObjectReference `json:"virtualMachineRef,omitempty"`
    
    	// Reference to the Secret containing IPMI/Redfish credentials.
    	AuthSecretRef *corev1.LocalObjectReference `json:"authSecretRef,omitempty"`
    }
    
    // VirtualMachineBMCStatus defines the observed state of VirtualMachineBMC.
    type VirtualMachineBMCStatus struct {
    	// IP address exposed by the BMC service
    	ClusterIP string `json:"clusterIP,omitempty"`
    
    	// List of current conditions (e.g., Ready)
    	Conditions []metav1.Condition `json:"conditions,omitempty"`
    }

    Getting Started

    Prerequisites

    • Go version v1.24.0+
    • Docker version 28.5+.
    • kubectl version v1.32.0+.
    • Access to a Kubernetes v1.32.0+ cluster.
    • KubeVirt v1.6.0+.

    Develop

    Build and push the images:

    export PUSH=true
    make docker-build
    
    # For building multi-arch images
    make docker-buildx

    NOTE: These images must be published to the personal registry you specified, and the working environment must have access to pull them. Make sure you have the proper permissions on the registry if the above commands don’t work.

    Install the CRDs into the cluster:

    make install

    Run the controller locally

    export ENABLE_WEBHOOKS=false
    make run

    Generate the Redfish API and server stubs

    [!NOTE] This section is only necessary if you want to change the Redfish schema version.

    Download the Redfish schema from the DMTF website:

    make download-redfish-schema

    Normally, the OpenAPI spec file hack/<REDFISH_SCHEMA_BUNDLE>/openapi/openapi.yaml is the one you need. Copy and modify it, making sure the changes are reflected in the file hack/redfish/spec/openapi.yaml. Then generate the code with openapi-generator:

    make generate-redfish-api

    The generated code will be placed in the pkg/generated/redfish directory.

    [!NOTE] You might also need to adjust the adapter and handler code because they are coupled with the Redfish schema to some degree.

    Deploy

    Deploy cert-manager

    kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.19.2/cert-manager.yaml

    Deploy the Manager to the cluster with the image specified by IMG:

    # Use the latest image at main-head
    make deploy
    
    # Or checkout to a specific branch/tag
    git checkout <branch/tag>
    make deploy
    
    # Or specify the custom-built image
    make deploy IMG=<some-registry>/virtbmc-controller:<tag>

    [!NOTE] If you encounter RBAC errors, you may need to grant yourself cluster-admin privileges or be logged in as admin.

    Interact with Virtual BMCs

    Set up the virtual BMC

    In order to have a functioning virtual BMC, the virtual machine and the BMC credentials referred to by the to-be-created VirtualMachineBMC resource need to be created first. Here is an example:

    ---
    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: test-vm
      namespace: default
    spec:
      runStrategy: Halted
      template:
        metadata:
          labels:
            kubevirt.io/domain: test-vm
        spec:
          domain:
            cpu:
              cores: 2
            devices:
              disks:
              - cdrom:
                  bus: sata
                name: cdrom
              interfaces:
              - name: default
                masquerade: {}
            features:
              acpi:
                enabled: true
            firmware:
              bootloader:
                efi:
                  secureBoot: false
            machine:
              type: q35
            memory:
              guest: 4Gi
            resources:
              limits:
                cpu: "2"
                memory: 4Gi
          hostname: test-vm
          networks:
          - name: default
            pod: {}
          evictionStrategy: LiveMigrateIfPossible
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: vmbmc-auth-secret
      namespace: default
    data:
      password: cGFzc3dvcmQ= # password
      username: YWRtaW4=     # admin

    After creating the above resources, it’s time for the VirtualMachineBMC resource:

    cat <<EOF | kubectl apply -f -
    apiVersion: bmc.kubevirt.io/v1beta1
    kind: VirtualMachineBMC
    metadata:
      name: test-vmbmc
      namespace: default
    spec:
      virtualMachineRef:
        name: test-vm
      authSecretRef:
        name: vmbmc-auth-secret
    EOF

    You can check the just-created VirtualMachineBMC resource and see whether it’s ready to serve:

    $ kubectl get virtualmachinebmcs test-vmbmc -o yaml
    apiVersion: bmc.kubevirt.io/v1beta1
    kind: VirtualMachineBMC
    metadata:
      annotations:
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"bmc.kubevirt.io/v1beta1","kind":"VirtualMachineBMC","metadata":{"annotations":{},"name":"test-vmbmc","namespace":"default"},"spec":{"authSecretRef":{"name":"vmbmc-auth-secret"},"virtualMachineRef":{"name":"test-vm"}}}
      creationTimestamp: "2025-12-10T05:44:54Z"
      generation: 1
      name: test-vmbmc
      namespace: default
      resourceVersion: "670418"
      uid: 1446bab2-0186-465e-ba02-ef5d5ed22df2
    spec:
      authSecretRef:
        name: vmbmc-auth-secret
      virtualMachineRef:
        name: test-vm
    status:
      clusterIP: 10.53.220.67
      conditions:
      - lastTransitionTime: "2025-12-10T05:44:54Z"
        message: ClusterIP assigned to the Service
        reason: ServiceReady
        status: "True"
        type: Ready

    Behind the scenes, KubeVirtBMC automatically creates the dedicated Pod and Service to provide the virtual BMC functionality for the virtual machine you specified. You can verify it by running the get command with the label:

    $ kubectl get pods,services -l kubevirt.io/virtualmachinebmc-name=test-vmbmc
    NAME                  READY   STATUS    RESTARTS   AGE
    pod/test-vm-virtbmc   1/1     Running   0          37s
    
    NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
    service/test-vm-virtbmc   ClusterIP   10.53.220.67   <none>        623/UDP,80/TCP   37s

    Access virtual BMC via IPMI

    To access the virtual BMC via IPMI, you need to be in the cluster network. Run a Pod that comes with ipmitool built in:

    kubectl run -it --rm ipmitool --image=mikeynap/ipmitool --command -- /bin/sh

    Inside the Pod, you can for example turn on the virtual machine via ipmitool:

    # Get the power status of the virtual machine
    $ ipmitool -I lan -U admin -P password -H test-vm-virtbmc.default.svc.cluster.local power status
    Chassis Power is off
    
    # Turn on the virtual machine
    $ ipmitool -I lan -U admin -P password -H test-vm-virtbmc.default.svc.cluster.local power on
    Chassis Power Control: Up/On
    
    # Wait a few seconds and then get the power status again
    $ ipmitool -I lan -U admin -P password -H test-vm-virtbmc.default.svc.cluster.local power status
    Chassis Power is on

    Access virtual BMC via Redfish

    To access the virtual BMC through the Redfish API, you can use curl:

    kubectl run -it --rm curl-redfish --image=curlimages/curl --command -- /bin/sh

    Inside the Pod, you can operate the virtual machine via Redfish APIs:

    # Get the Redfish ServiceRoot
    $ curl -L http://test-vm-virtbmc.default.svc/redfish/v1
    {"@odata.context":"/redfish/v1/$metadata#ServiceRoot.ServiceRoot","@odata.id":"/redfish/v1","@odata.type":"#ServiceRoot.v1_16_1.ServiceRoot","AccountService":{"@odata.id":"/redfish/v1/AccountService"},"AggregationService":{},"Cables":{},"CertificateService":{},"Chassis":{"@odata.id":"/redfish/v1/Chassis"},"ComponentIntegrity":{},"CompositionService":{"@odata.id":"/redfish/v1/CompositionService"},"Description":"ServiceRoot","EventService":{"@odata.id":"/redfish/v1/EventService"},"Fabrics":{},"Facilities":{},"Id":"","JobService":{},"JsonSchemas":{},"KeyService":{},"LicenseService":{},"Links":{"ManagerProvidingService":{"@odata.id":"/redfish/v1/Managers/BMC"},"Sessions":{"@odata.id":"/redfish/v1/SessionService/Sessions"}},"Managers":{"@odata.id":"/redfish/v1/Managers"},"NVMeDomains":{},"Name":"ServiceRoot","PowerEquipment":{},"ProtocolFeaturesSupported":{"DeepOperations":{},"ExpandQuery":{}},"RedfishVersion":"1.16.1","RegisteredClients":{},"Registries":{"@odata.id":"/redfish/v1/Registries"},"ResourceBlocks":{},"ServiceConditions":{},"SessionService":{"@odata.id":"/redfish/v1/SessionService"},"Storage":{},"StorageServices":{},"StorageSystems":{},"Systems":{"@odata.id":"/redfish/v1/Systems"},"Tasks":{"@odata.id":"/redfish/v1/Tasks"},"TelemetryService":{"@odata.id":"/redfish/v1/TelemetryService"},"ThermalEquipment":{},"UUID":"00000000-0000-0000-0000-000000000000","UpdateService":{"@odata.id":"/redfish/v1/UpdateService"}}
    
    # Log in by creating a session
    $ curl -i -X POST -H "Content-Type: application/json" http://test-vm-virtbmc.default.svc/redfish/v1/SessionService/Sessions -d '{"UserName":"admin","Password":"password"}'
    HTTP/1.1 201 Created
    Content-Type: application/json; charset=UTF-8
    Location: /redfish/v1/SessionService/Sessions/337bf6b2-e4c7-41c8-bfe4-fe3ee3ce40f2
    X-Auth-Token: 55f88d07289cf1207b7b967f1823f5b28e08c8977f6c742f8175274afb214c93
    Date: Wed, 18 Dec 2024 15:27:04 GMT
    Content-Length: 225
    
    {"@odata.id":"/redfish/v1/SessionService/Sessions/1","@odata.type":"Session.v1_7_1.Session","Actions":{},"Id":"337bf6b2-e4c7-41c8-bfe4-fe3ee3ce40f2","Links":{"OutboundConnection":{}},"Name":"User Session","UserName":"admin"}
    
    # Get the System resource
    $ curl -H "X-Auth-Token: 55f88d07289cf1207b7b967f1823f5b28e08c8977f6c742f8175274afb214c93" http://test-vm-virtbmc.default.svc/redfish/v1/Systems/1
    {"@odata.context":"/redfish/v1/$metadata#ComputerSystem.ComputerSystem","@odata.id":"/redfish/v1/Systems/1","@odata.type":"#ComputerSystem.v1_22_0.ComputerSystem","Actions":{"#ComputerSystem.AddResourceBlock":{},"#ComputerSystem.Decommission":{},"#ComputerSystem.RemoveResourceBlock":{},"#ComputerSystem.Reset":{"target":"/redfish/v1/Systems/1/Actions/ComputerSystem.Reset","title":"Reset"},"#ComputerSystem.SetDefaultBootOrder":{}},"AssetTag":"","Bios":{},"Boot":{"BootOptions":{},"BootSourceOverrideEnabled":"Disabled","BootSourceOverrideMode":"Legacy","BootSourceOverrideTarget":"Hdd","Certificates":{}},"BootProgress":{},"Certificates":{},"Composition":{},"Description":"Computer System","EthernetInterfaces":{},"FabricAdapters":{},"GraphicalConsole":{},"GraphicsControllers":{},"HostWatchdogTimer":{"FunctionEnabled":false,"Status":{},"TimeoutAction":""},"HostedServices":{"StorageServices":{}},"Id":"1","IdlePowerSaver":{},"IndicatorLED":"Unknown","KeyManagement":{"KMIPCertificates":{}},"LastResetTime":"0001-01-01T00:00:00Z","Links":{"HostingComputerSystem":{}},"LogServices":{},"Manufacturer":"KubeVirt","Memory":{},"MemoryDomains":{},"MemorySummary":{"Metrics":{},"Status":{},"TotalSystemMemoryGiB":0},"Model":"KubeVirt","Name":"default/test-vm","NetworkInterfaces":{"@odata.id":"/redfish/v1/Systems/1/NetworkInterfaces"},"OperatingSystem":"/redfish/v1/Systems/1/OperatingSystem","PartNumber":"","PowerState":"Off","ProcessorSummary":{"Count":0,"Metrics":{},"Status":{}},"Processors":{},"SKU":"","SecureBoot":{},"SerialConsole":{"IPMI":{},"SSH":{},"Telnet":{}},"SerialNumber":"000000000000","SimpleStorage":{"@odata.id":"/redfish/v1/Systems/1/SimpleStorage"},"Status":{},"Storage":{"@odata.id":"/redfish/v1/Systems/1/Storage"},"SystemType":"Virtual","USBControllers":{},"UUID":"00000000-0000-0000-0000-000000000000","VirtualMedia":{"@odata.id":"/redfish/v1/Systems/1/VirtualMedia"},"VirtualMediaConfig":{}}
    
    # Set the boot device to PXE
    $ curl -i -X PATCH -H "Content-Type: application/json" -H "X-Auth-Token: 55f88d07289cf1207b7b967f1823f5b28e08c8977f6c742f8175274afb214c93" http://test-vm-virtbmc.default.svc/redfish/v1/Systems/1 -d '{"Boot":{"BootSourceOverrideTarget":"Pxe","BootSourceOverrideEnabled":"Continuous"}}'
    HTTP/1.1 204 No Content
    Content-Type: application/json; charset=UTF-8
    Date: Wed, 18 Dec 2024 15:54:09 GMT
    
    # Start the virtual machine
    $ curl -i -X POST -H "Content-Type: application/json" -H "X-Auth-Token: 55f88d07289cf1207b7b967f1823f5b28e08c8977f6c742f8175274afb214c93" http://test-vm-virtbmc.default.svc/redfish/v1/Systems/1/Actions/ComputerSystem.Reset -d '{"ResetType":"On"}'
    HTTP/1.1 204 No Content
    Content-Type: application/json; charset=UTF-8
    Date: Wed, 18 Dec 2024 15:59:25 GMT
    
    # Reboot the virtual machine
    $ curl -i -X POST -H "Content-Type: application/json" -H "X-Auth-Token: 55f88d07289cf1207b7b967f1823f5b28e08c8977f6c742f8175274afb214c93" http://test-vm-virtbmc.default.svc/redfish/v1/Systems/1/Actions/ComputerSystem.Reset -d '{"ResetType":"ForceRestart"}'
    HTTP/1.1 204 No Content
    Content-Type: application/json; charset=UTF-8
    Date: Wed, 18 Dec 2024 16:02:49 GMT
    
    # Stop the virtual machine
    $ curl -i -X POST -H "Content-Type: application/json" -H "X-Auth-Token: 55f88d07289cf1207b7b967f1823f5b28e08c8977f6c742f8175274afb214c93" http://test-vm-virtbmc.default.svc/redfish/v1/Systems/1/Actions/ComputerSystem.Reset -d '{"ResetType":"GracefulShutdown"}'
    HTTP/1.1 204 No Content
    Content-Type: application/json; charset=UTF-8
    Date: Wed, 18 Dec 2024 16:05:30 GMT
    
    # Log out by deleting the session
    $ curl -i -X DELETE -H "X-Auth-Token: 55f88d07289cf1207b7b967f1823f5b28e08c8977f6c742f8175274afb214c93" http://test-vm-virtbmc.default.svc/redfish/v1/SessionService/Sessions/337bf6b2-e4c7-41c8-bfe4-fe3ee3ce40f2
    HTTP/1.1 204 No Content
    Content-Type: application/json; charset=UTF-8
    Date: Wed, 18 Dec 2024 16:06:12 GMT
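The JSON documents returned above are dense. A quick way to pull out individual fields, sketched here with python3 against a trimmed sample of the ComputerSystem response (no extra tooling assumed beyond python3):

```shell
# Extract a field from a Redfish response. The JSON here is a trimmed sample
# of the ComputerSystem document shown above; in practice you would pipe the
# curl output straight into the same one-liner (or into jq, if available).
JSON='{"Id":"1","Manufacturer":"KubeVirt","PowerState":"Off","SystemType":"Virtual"}'
echo "$JSON" | python3 -c 'import sys, json; print(json.load(sys.stdin)["PowerState"])'
# prints: Off
```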

    You can even attach an ISO image to the virtual machine, and detach it again, with the Redfish virtual media function:

    # Insert virtual media to the virtual machine
    $ curl -i -X POST -H "Content-Type: application/json" -H "X-Auth-Token: 55f88d07289cf1207b7b967f1823f5b28e08c8977f6c742f8175274afb214c93" http://test-vm-virtbmc.default.svc/redfish/v1/Managers/BMC/VirtualMedia/CD1/Actions/VirtualMedia.InsertMedia -d '{"Image": "https://releases.ubuntu.com/noble/ubuntu-24.04.3-live-server-amd64.iso", "Inserted": true}'
    
    # Get virtual media status
    $ curl -i -H "X-Auth-Token: 55f88d07289cf1207b7b967f1823f5b28e08c8977f6c742f8175274afb214c93" http://test-vm-virtbmc.default.svc/redfish/v1/Managers/BMC/VirtualMedia/CD1
    {"@odata.context":"/redfish/v1/$metadata#VirtualMedia.VirtualMedia","@odata.id":"/redfish/v1/Managers/BMC/VirtualMedia/CD1","@odata.type":"#VirtualMedia.v1_6_3.VirtualMedia","Actions":{"#VirtualMedia.EjectMedia":{},"#VirtualMedia.InsertMedia":{}},"Certificates":{},"ClientCertificates":{},"ConnectedVia":"URI","Description":"Virtual Media","Id":"CD1","Image":"https://releases.ubuntu.com/noble/ubuntu-24.04.3-live-server-amd64.iso","ImageName":"","Inserted":true,"MediaTypes":["CD","DVD"],"Name":"Virtual Media","Status":{},"WriteProtected":false}
    
    # Eject virtual media from the virtual machine
    $ curl -i -X POST -H "Content-Type: application/json" -H "X-Auth-Token: 55f88d07289cf1207b7b967f1823f5b28e08c8977f6c742f8175274afb214c93" http://test-vm-virtbmc.default.svc/redfish/v1/Managers/BMC/VirtualMedia/CD1/Actions/VirtualMedia.EjectMedia -d '{}'

    Under the hood, KubeVirtBMC’s Redfish virtual media function is backed by KubeVirt’s DeclarativeHotplugVolumes feature and CDI DataVolume. As prerequisites, you therefore need to enable that feature gate and have CDI installed in the cluster. Each virtual machine on which you want to use the virtual media function must have a CD-ROM disk defined in its VirtualMachine resource as a stub for volume hotplug. For instance:

            ...
            devices:
              disks:
              - cdrom:       # The cdrom stub must exist before using the virtual media function
                  bus: sata
                name: cdrom  # The name of the CD-ROM disk can be anything
            ...
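The DeclarativeHotplugVolumes feature gate itself is enabled on the KubeVirt CR. A sketch, assuming the default CR name and namespace (kubevirt in kubevirt; adjust to your installation):

```shell
# Sketch: add the DeclarativeHotplugVolumes feature gate to the KubeVirt CR.
# The CR name/namespace (kubevirt/kubevirt) are assumptions, not from the README.
# Note: a JSON merge patch replaces the whole featureGates list, so include any
# gates that are already enabled.
PATCH='{"spec":{"configuration":{"developerConfiguration":{"featureGates":["DeclarativeHotplugVolumes"]}}}}'
if command -v kubectl >/dev/null 2>&1; then
  kubectl patch kubevirt kubevirt -n kubevirt --type merge -p "$PATCH"
fi
```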

    Expose the Redfish API externally

    Since Redfish is an HTTP-based API, you can expose the Redfish service outside the cluster with the aid of an Ingress controller. What’s more, you can use cert-manager to issue a certificate for the Redfish service.

    Here, we will use the self-signed issuer type as an example (please note the security implications; for more details, see https://cert-manager.io/docs/configuration/selfsigned/). To do so, create an Issuer resource in the same namespace as the VirtualMachineBMC resource:

    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: kubevirtbmc-selfsigned-issuer
      namespace: default
    spec:
      selfSigned: {}

    Next, create an Ingress resource (assuming you have an Ingress controller, e.g., nginx-ingress, installed) for each VirtualMachineBMC resource you want to expose:

    cat <<EOF | kubectl apply -f -
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      annotations:
        cert-manager.io/issuer: "kubevirtbmc-selfsigned-issuer"
      name: test-vm-virtbmc
      namespace: default
    spec:
      ingressClassName: nginx
      tls:
      - hosts:
        - test-vm-virtbmc.default.<ingress-nginx-lb-svc-ip>.sslip.io
        secretName: test-vm-virtbmc-tls
      rules:
      - host: test-vm-virtbmc.default.<ingress-nginx-lb-svc-ip>.sslip.io
        http:
          paths:
          - backend:
              service:
                name: test-vm-virtbmc
                port:
                  number: 80
            path: /
            pathType: Prefix
    EOF

    You can then access the Redfish service via https://test-vm-virtbmc.default.<ingress-nginx-lb-svc-ip>.sslip.io/redfish/v1 from anywhere. Since the certificate is self-signed, clients need to either skip verification (e.g. curl -k) or trust the issuing CA.

    To Uninstall

    Delete the instances (CRs) from the cluster:

    kubectl delete kubevirtbmcs test-vmbmc

    Delete the APIs (CRDs) from the cluster:

    make uninstall

    Undeploy the controller from the cluster:

    make undeploy

    License

    Copyright 2024 Zespre Chang starbops@hey.com

    Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0
    

    Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

    Visit original content creator repository
  • full-stack-nginx-drupal-for-everyone-with-docker-compose

    If you want to build a website with Drupal in a short time:

    Full stack Nginx Drupal:

    Drupal     docker compose     mariadb     mysql     nginx     php     redis     varnish     Bash     phpmyadmin     certbot     letsencrypt     portainer     backup

    Plus, manage docker containers with Portainer.

    Supported CPU architectures:

    arm64/aarch64, x86-64

    Supported Linux Package Manage Systems:

    apk, dnf, yum, apt/apt-get, zypper, pacman

    Supported Linux Operation Systems:

    alpine linux     fedora     centos     debian     ubuntu     redhat on s390x (IBM Z)     opensuse on s390x (IBM Z)     arch linux

    Note: tested and compatible on Fedora 37/39 and Alpine Linux x86-64; SLES on IBM Z (s390x), RHEL on IBM Z (s390x), and Raspberry Pi could not be tried.

    With this project you can quickly run the following:

    For certbot (letsencrypt) certificate:

    IPv4/IPv6 Firewall

    Create rules to open ports to the internet, or to a specific IPv4 address or range.

    • http: 80
    • https: 443
    • portainer: 9001
    • phpmyadmin: 9090

    Contents:

    Automatic

    Run the install shell script for automatic installation and configuration

    download with

    git clone https://github.com/damalis/full-stack-nginx-drupal-for-everyone-with-docker-compose.git
    

    Open a terminal and cd to the folder in which docker-compose.yml is saved and run:

    cd full-stack-nginx-drupal-for-everyone-with-docker-compose
    chmod +x install.sh
    ./install.sh
    

    Manual

    Requirements

    Make sure you have the latest versions of Docker and Docker Compose installed on your machine.

    Clone this repository or copy the files from this repository into a new folder.

    Make sure to add your user to the docker group.

    Configuration

    download with

    git clone https://github.com/damalis/full-stack-nginx-drupal-for-everyone-with-docker-compose.git
    

    Open a terminal and cd to the folder in which docker-compose.yml is saved and run:

    cd full-stack-nginx-drupal-for-everyone-with-docker-compose
    

    Copy the example environment into .env

    cp env.example .env
    

    Edit the .env file to change values of

    LOCAL_TIMEZONE, DOMAIN_NAME, DIRECTORY_PATH, LETSENCRYPT_EMAIL, DB_USER, DB_PASSWORD, DB_NAME, MYSQL_ROOT_PASSWORD, DATABASE_IMAGE_NAME, DATABASE_CONT_NAME, DATABASE_PACKAGE_MANAGER, DATABASE_ADMIN_COMMANDLINE, PMA_CONTROLUSER, PMA_CONTROLPASS, PMA_HTPASSWD_USERNAME, PMA_HTPASSWD_PASSWORD, VARNISH_VERSION, SSL_SNIPPET

    Variable                    Value
    LOCAL_TIMEZONE              your local timezone (see the list of local timezones)
    DIRECTORY_PATH              the output of pwd at the command line
    DATABASE_IMAGE_NAME         mariadb or mysql
    DATABASE_CONT_NAME          mariadb, mysql or a custom name
    DATABASE_PACKAGE_MANAGER    for mariadb: apt-get update && apt-get install -y gettext-base
                                for mysql: microdnf install -y gettext
    DATABASE_ADMIN_COMMANDLINE  for mariadb: mariadb-admin
                                for mysql: mysqladmin
    VARNISH_VERSION             for centos 9+ and fedora: latest
                                for the others: stable
    SSL_SNIPPET                 for localhost: echo 'Generated Self-signed SSL Certificate at localhost'
                                for a remote host: certbot certonly --webroot --webroot-path /tmp/acme-challenge --rsa-key-size 4096 --non-interactive --agree-tos --no-eff-email --force-renewal --email ${LETSENCRYPT_EMAIL} -d ${DOMAIN_NAME} -d www.${DOMAIN_NAME}

    and

    cp ./phpmyadmin/apache2/sites-available/default-ssl.sample.conf ./phpmyadmin/apache2/sites-available/default-ssl.conf
    

    change example.com to your domain name in ./phpmyadmin/apache2/sites-available/default-ssl.conf file.

    cp ./database/phpmyadmin/sql/create_tables.sql.template.example ./database/phpmyadmin/sql/create_tables.sql.template
    

    change pma_controluser and db_authentication_password in ./database/phpmyadmin/sql/create_tables.sql.template file.

    Installation

    First, create the external volume:

    docker volume create --driver local --opt type=none --opt device=${PWD}/certbot --opt o=bind certbot-etc
    

    Localhost SSL: generate a self-signed SSL certificate following the guide in the mkcert repository.

    docker compose up -d
    

    then restart the webserver to reload its SSL configuration:

    docker container restart webserver
    

    The containers are now built and running. You should be able to access the Drupal installation at the configured domain name in your browser, e.g. https://example.com.

    For convenience you may add a new entry into your hosts file.

    Portainer

    docker compose -f portainer-docker-compose.yml -p portainer up -d 
    

    Portainer is the definitive container management tool for Docker and Docker Swarm, with its highly intuitive GUI and API.

    You can also visit https://example.com:9001 to access Portainer after starting the containers.

    Usage

    You can manage Docker containers without the command line by using Portainer.

    Here’s a quick reference of commonly used Docker and Docker Compose commands:

    docker ps -a	# Lists all containers, running and stopped
    
    docker compose start	# Starts previously stopped containers
    
    docker compose stop	# Stops all running containers
    
    docker compose down	# Stops and removes containers, networks, etc.
    
    docker compose down -v	# Also removes volumes (-v is short for --volumes)
    
    docker rm -f $(docker ps -a -q)	# Force-removes all containers, including Portainer
    
    docker volume rm $(docker volume ls -q)	# Removes all volumes
    
    docker network prune	# Remove all unused networks
    
    docker system prune	# Removes unused data (containers, networks, images, and optionally volumes)
    
    docker system prune -a	# Removes all unused images, not just dangling ones
    
    docker rmi $(docker image ls -q)	# Removes all images, including Portainer's
    
    docker container logs container_name_or_id	# Shows the logs of a single container
    

    Project from existing source

    Copy all files into a new directory:

    docker compose up -d	# Starts services in detached mode (in the background)
    

    Docker Compose CLI reference

    https://docs.docker.com/reference/cli/docker/compose/

    Website

    You should see the “Drupal installation” page in your browser. If not, please check if your PHP installation satisfies Drupal’s requirements.

    https://example.com
    

    if you see “The website encountered an unexpected error. Please try again later.” in your browser, run drush cache:rebuild in the drupal container.

    add or remove code in the ./php-fpm/php/conf.d/security.ini file for custom php.ini configurations

    https://www.php.net/manual/en/configuration.file.php

    Make custom host configuration changes in ./php-fpm/php-fpm.d/z-www.conf, then restart the service. FPM uses php.ini syntax for its configuration file (php-fpm.conf) and for the pool configuration files.

    https://www.php.net/manual/en/install.fpm.configuration.php

    docker container restart drupal
    

    add and/or remove drupal site folders and files in the ./drupal folder with any ftp client program.
    You can also visit https://example.com to access the website after starting the containers.

    Webserver

    add or remove code in the ./webserver/templates/nginx.conf.template file for custom nginx configurations

    https://docs.nginx.com/nginx/admin-guide/basic-functionality/managing-configuration-files/

    Database

    ADVANCED OPTIONS -> |Host: database|Username: root|Password: root password|

    https://mariadb.com/kb/en/configuring-mariadb-with-option-files/

    https://dev.mysql.com/doc/

    Redis

    at the page https://example.com/en/admin/modules, filter for redis, check the module, then install it.

    If these lines aren’t already present, edit the Drupal settings file ./drupal/sites/default/settings.php and add them at the bottom of the file:

    $settings['redis.connection']['interface'] = 'PhpRedis';
    // Host ip address.
    $settings['redis.connection']['host'] = 'redis';
    											 
    $settings['cache']['default'] = 'cache.backend.redis';
    // Redis port.
    $settings['redis.connection']['port'] = '6379';
    $settings['redis.connection']['base'] = 12;
    // Password of redis updated in php.ini file.
    // $settings['redis.connection']['password'] = "password";
    $settings['cache']['bins']['bootstrap'] = 'cache.backend.chainedfast';
    $settings['cache']['bins']['discovery'] = 'cache.backend.chainedfast';
    $settings['cache']['bins']['config'] = 'cache.backend.chainedfast';
    

    Create ./drupal/sites/default/files/services.yml inside the default folder and add the code below to it.

    services:
      # Cache tag checksum backend. Used by redis and most other cache backends
      # to deal with cache tag invalidations.
      cache_tags.invalidator.checksum:
        class: Drupal\redis\Cache\RedisCacheTagsChecksum
        arguments: ['@redis.factory']
        tags:
          - { name: cache_tags_invalidator }

      # Replaces the default lock backend with a redis implementation.
      lock:
        class: Drupal\Core\Lock\LockBackendInterface
        factory: ['@redis.lock.factory', get]

      # Replaces the default persistent lock backend with a redis implementation.
      lock.persistent:
        class: Drupal\Core\Lock\LockBackendInterface
        factory: ['@redis.lock.factory', get]
        arguments: [true]

      # Replaces the default flood backend with a redis implementation.
      flood:
        class: Drupal\Core\Flood\FloodInterface
        factory: ['@redis.flood.factory', get]
    

    Varnish

    at the page https://example.com/en/admin/modules, filter for purge, check the module, then install it.

    Varnish Server Hostname: |varnish|

    Varnish Server Port: |8080|

    Scheme: |http|

    Follow this link to complete the Varnish configuration.

    All necessary changes to sites/default and sites/default/settings.php have been made, so you should remove write permissions to them now in order to avoid security risks.

    sudo chmod 655 ./drupal/sites/default/settings.php
    

    phpMyAdmin

    You can add your own custom config.inc.php settings (such as Configuration Storage setup) by creating a file named config.user.inc.php with the various user defined settings in it, and then linking it into the container using:

    ./phpmyadmin/config.user.inc.php
    

    You can also visit https://example.com:9090 to access phpMyAdmin after starting the containers.

    For both the first authorization prompt (the htpasswd username and password) and the phpMyAdmin login screen, the username and password are the same as those supplied in the .env file.

    Backup

    This will back up all files and folders in the database/dump sql and html volumes, once per day, and write them to ./backups with a filename like backup-2023-01-01T10-18-00.tar.gz

    It can run on a custom cron schedule:

    BACKUP_CRON_EXPRESSION: '20 01 * * *' (in the UTC timezone)
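Restoring is the reverse: unpack the dated archive. A self-contained round-trip sketch (all paths and the archive name here are illustrative, not part of the project):

```shell
# Illustrative round trip: create a dated archive the way the backup job names
# them, then unpack it into a scratch directory (paths are hypothetical).
mkdir -p demo/backups demo/html
echo "site data" > demo/html/index.html
tar -czf demo/backups/backup-2023-01-01T10-18-00.tar.gz -C demo html
mkdir -p demo/restore
tar -xzf demo/backups/backup-2023-01-01T10-18-00.tar.gz -C demo/restore
cat demo/restore/html/index.html
# prints: site data
```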

    Visit original content creator repository
  • nordPing

    nordPing

    A utility that pings a range of NordVPN servers and returns the servers with the fastest response

    Help Output:

    python nordPing.py [-h] [-c PING_COUNT] [-n TOP_N] [-C COUNTRY_CODE] [-L LOWER_RANGE] [-U UPPER_RANGE] [-p PROCESSES] [--version]
    
    This script will ping the NordVPN servers and return the ones with the fastest response times
    
    optional arguments:
      -h, --help            show this help message and exit
      -c PING_COUNT, --ping_count PING_COUNT
                            Number of pings to send to each server (Default: 1)
      -n TOP_N, --top_n TOP_N
                            Number of fastest responses to return (Default: 3)
      -C COUNTRY_CODE, --country_code COUNTRY_CODE
                            Country code for the servers to ping (Default: us)
      -L LOWER_RANGE, --lower_range LOWER_RANGE
                            Lower range of the servers to ping (Default: 5500)
      -U UPPER_RANGE, --upper_range UPPER_RANGE
                            Upper range of the servers to ping (Default: 5502)
      -p PROCESSES, --processes PROCESSES
                            Number of processes to use (Default: 5)
      --version             show program's version number and exit
    

    Example Use:

    Input

    python nordPing.py -c 3 -n 5 -C us -L 9372 -U 9390 -p 8
    

    Output

    
    Settings:
    -------------------------------
    Ping count:             3
    Country code:           us
    Lower range:            9372
    Upper range:            9390
    Parallel Processes:     8
    
    The 5 fastest responses are:
    -------------------------------
     - us9373.nordvpn.com: 17.9 ms
     - us9382.nordvpn.com: 18.0 ms
     - us9378.nordvpn.com: 18.3 ms
     - us9385.nordvpn.com: 19.3 ms
     - us9379.nordvpn.com: 19.5 ms
    
    
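The selection step behind that output boils down to a numeric sort on round-trip time. A sketch with canned "host rtt" pairs (a real run would feed in ping results instead):

```shell
# Canned "host rtt" pairs standing in for real ping results; the core selection
# nordPing performs is a numeric sort on the second column, keeping the top N.
results='us9373.nordvpn.com 17.9
us9382.nordvpn.com 18.0
us9378.nordvpn.com 18.3
us9385.nordvpn.com 19.3
us9379.nordvpn.com 19.5
us9380.nordvpn.com 25.1'
echo "$results" | sort -k2 -n | head -5
```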

    Contributions:

    Contributions are welcome. Fork the repo, make your changes, create a diff file, and email the diff file and your GitHub username to luis@moraguez.com. If the changes are approved, you will be added as a contributor to the repo.

    Donations:

    If this utility helped you with a project you’re working on and you wish to make a donation, you can do so by clicking the donate button that follows. Thank you for your generosity and support!

    Donate using Liberapay

    Visit original content creator repository

  • JohannesSteu.JwtAuth

    JohannesSteu.JwtAuth

    This package is a simple demo of how to implement JWT authentication in Neos Flow.
    For more details about JSON Web Tokens themselves, check https://jwt.io/introduction/.

    This mechanism is a great choice for signing API requests in Flow.

    This package contains

    JwtToken

    This class represents a JWT token. This token contains the JWT string which is sent in your request. The JWT string must be provided in an X-JWT header.
    The payload itself must contain a property accountIdentifier.

    JwtTokenProvider

    The JwtTokenProvider validates a JwtToken. It first checks whether the token contains a JWT string at all, and then tries to decode it with a configured shared secret. If the payload can be decoded, it creates a transient account with the data from the payload and sets this account as authenticated.

    Access data from the payload in flow

    This demo implementation will set the full payload into the authenticated token. To access the data
    in your flow application:

    $authenticationToken = $this->securityContext->getAuthenticationTokensOfType(JwtToken::class)[0];
    $jwtPayload = $authenticationToken->getPayload();
    
    Example Request

    This is a valid request and will be authenticated with the role JohannesSteu.JwtAuth:User in flow:

    curl -H "X-JWT: eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJhY2NvdW50SWRlbnRpZmllciI6InNvbWUtYWNjb3VudCIsIm5hbWUiOiJKb2huIERvZSJ9.8slTfTqCRozgcby-As6KxeCb5Zq9zX3TmVUcJAgW328" http://your-app.com
    

    To debug the jwt string click here.
    Enter the shared secret aSharedSecret to verify the signature.
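To see how such a token is put together, here is a sketch that rebuilds the token by hand with openssl. The header and payload values come from the example request above, and the secret is the demo's aSharedSecret; the b64url helper is ours, not part of the package:

```shell
# Sketch: assemble an HS256 JWT by hand. Header and payload match the example
# request above; the signing secret is the demo's aSharedSecret.
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }
header=$(printf '%s' '{"typ":"JWT","alg":"HS256"}' | b64url)
payload=$(printf '%s' '{"accountIdentifier":"some-account","name":"John Doe"}' | b64url)
sig=$(printf '%s.%s' "$header" "$payload" \
  | openssl dgst -sha256 -hmac 'aSharedSecret' -binary | b64url)
echo "$header.$payload.$sig"
```

The first two dot-separated segments of the output should match the token in the example request; the third is the HMAC-SHA256 signature over them.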

    Visit original content creator repository

  • eldarica

    Eldarica

    Eldarica is a model checker for Horn clauses, Numerical Transition
    Systems, and software programs. Inputs can be read in a variety of
    formats, including SMT-LIB 2 and Prolog for Horn clauses, and fragments of
    Scala and C for software programs, and are analysed using a variant of the
    Counterexample-Guided Abstraction
    Refinement (CEGAR) method. Eldarica is fast and includes sophisticated
    interpolation-based techniques for synthesising new predicates for
    CEGAR, enabling it to solve a wide range of verification problems.

    The Eldarica C parser accepts programs augmented with various primitives
    from the timed automata world: supporting concurrency, clocks, communication
    channels, as well as analysis of systems with an unbounded number of
    processes (parameterised analysis).

    There is also a variant of Eldarica for analysing Petri nets: http://www.philipp.ruemmer.org/eldarica-p.shtml

    Eldarica has been developed by Hossein Hojjat and Philipp Ruemmer,
    with further contributions by Zafer Esen, Filip Konecny, and Pavle Subotic.

    There is a simple web interface to experiment with the C interface
    of Eldarica:
    https://eldarica.org/eldarica

    Documentation

    You can either download a binary release of Eldarica, or compile the Scala
    code yourself. Since Eldarica uses sbt, compilation is quite
    simple: you just need sbt installed on your machine, and
    typing sbt assembly will download the compiler and all
    required libraries and produce a binary of Eldarica.

    After compilation (or downloading a binary release), calling Eldarica
    is normally as easy as saying

    ./eld regression-tests/horn-smt-lib/rate_limiter.c.nts.smt2

    When using a binary release, one can instead also call

    java -jar target/scala-2.*/Eldarica-assembly*.jar regression-tests/horn-smt-lib/rate_limiter.c.nts.smt2

    A set of examples is provided on https://eldarica.org/eldarica, and included
    in the distributions directory regression-tests.

    You can use the script eld-client instead of
    eld in order to run Eldarica in a server-client mode,
    which significantly speeds up processing of multiple problems.

    A full list of options can be obtained by calling ./eld -h.

    The options -disj, -abstract, -stac can be used to control
    predicate generation. For the option -stac to work, it is currently necessary to have Yices (version 1) installed, as this is a dependency of the Flata library.

    The option -sym can be used to switch to the symbolic execution engine of Eldarica, which will then be applied instead of CEGAR.

    Papers

    Related Links

    Visit original content creator repository

  • Blog

    Build Status Total Downloads Latest Stable Version License

    Description

    This is a blog application created with an MVC architecture.

    Technologies

    • Laravel 8
    • Laravel Livewire
    • Laravel Jetstream
    • Laravel Permission
    • Laravel Collective
    • MySql Database
    • Blade Templates Frontend
    • Tailwind CSS
    • AdminLTE

    About Laravel

    Laravel is a web application framework with expressive, elegant syntax. We believe development must be an enjoyable and creative experience to be truly fulfilling. Laravel takes the pain out of development by easing common tasks used in many web projects.

    Laravel is accessible, powerful, and provides tools required for large, robust applications.

    Learning Laravel

    Laravel has the most extensive and thorough documentation and video tutorial library of all modern web application frameworks, making it a breeze to get started with the framework.

    If you don’t feel like reading, Laracasts can help. Laracasts contains over 1500 video tutorials on a range of topics including Laravel, modern PHP, unit testing, and JavaScript. Boost your skills by digging into our comprehensive video library.

    Laravel Sponsors

    We would like to extend our thanks to the following sponsors for funding Laravel development. If you are interested in becoming a sponsor, please visit the Laravel Patreon page.

    Premium Partners

    Contributing

    Thank you for considering contributing to the Laravel framework! The contribution guide can be found in the Laravel documentation.

    Code of Conduct

    In order to ensure that the Laravel community is welcoming to all, please review and abide by the Code of Conduct.

    Security Vulnerabilities

    If you discover a security vulnerability within Laravel, please send an e-mail to Taylor Otwell via taylor@laravel.com. All security vulnerabilities will be promptly addressed.

    License

    The Laravel framework is open-sourced software licensed under the MIT license.

    Visit original content creator repository