Blog

  • jetpack-advanced-drop-targets


    Jetpack – Advanced Drop Target

    (powered by iDA Mediafoundry)

    Use Touch UI drop targets in a more flexible way to support array fields and composite multifields.

    Modules

The main parts of this project are:

    • core: Java bundle containing all core functionality like OSGi services, Sling Models and WCMCommand.
• ui.apps: the /apps part, containing the HTML, JS, CSS and .content.xml files.

    How to build

To build all the modules, run the following command with Maven 3 from the project root directory:

    mvn clean install
    

If you have a running AEM instance, you can build and package the whole project and deploy it into AEM with

    mvn clean install -PautoInstallPackage
    

    Or to deploy it to a publish instance, run

    mvn clean install -PautoInstallPackagePublish
    

    Or alternatively

    mvn clean install -PautoInstallPackage -Daem.port=4503
    

    Or to deploy only the bundle to the author, run

    mvn clean install -PautoInstallBundle
    

    Testing

    There are three levels of testing contained in the project:

Unit tests in core: these showcase classic unit testing of the code contained in the bundle. To run them, execute:

    mvn clean test
    
    Visit original content creator repository https://github.com/we-are-ida/jetpack-advanced-drop-targets
  • tcpIpPg

    tcpIpPg

    10GbE XGMII TCP/IPv4 packet generator for Verilog and VHDL

The tcpIpPg project is a set of verification IP for generating and receiving 10GbE TCP/IPv4 Ethernet packets over an XGMII interface in a Verilog or VHDL test environment. The generation environment is a set of C++ classes used to generate packets into a buffer and then send that buffer over the HDL XGMII interface. The connection between the HDL and the C++ domain is made using the Virtual Processor (VProc), a piece of VIP that allows C and C++ code, compiled for the local machine, to run in and access the Verilog or VHDL simulation environment; VProc is freely available on GitHub. The project also has a sibling, the udpIpPg VIP, which supports UDP/IPv4 over GbE with a GMII interface and an optional converter block for RGMII.

    The intent for this packet generator is to allow ease of test vector generation when verifying 10G Ethernet logic IP, such as a MAC, and/or a server or client for TCP and IPv4 protocols. The bulk of the functionality is defined in the provided C++ classes, making it easily extensible to allow support for other protocols such as UDP and IPv6. It is also meant to allow exploration of how these protocols function, as an educational vehicle.

An example test environment is provided for ModelSim, with two packet generators instantiated and connected to one another, one acting as a client and one acting as a server. Connection establishment and disconnection software is provided in the test code to illustrate how packet generation is done, and how to easily build up more complex and useful patterns of packets. Formatted output of received packets can be displayed during the simulation.

    Features

The basic functionality provided is listed below:

    • A Verilog module or VHDL component, tcp_ip_pg
  • Clock input, nominally 156.25 MHz (10×10⁹ ÷ 64)
      • XGMII interface, with TX and RX data and control ports
      • A halt output for use in test bench control
    • A class to generate a TCP/IPv4 packet into a buffer
    • A class to send a generated packet over the XGMII interface
    • A means to receive TCP/IPv4 packets over the XGMII interface and buffer them
    • A means to display, in a formatted manner, received packets
• A connection state machine is not part of the packet generation class, but (incomplete) examples are provided as part of the test environment.
    • A means to request a halt of the simulation (when no more test data to send)
    • A means to read a clock tick counter from the software
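To make the packet-into-a-buffer idea concrete, here is a minimal sketch of IPv4 header construction with a valid Internet checksum (RFC 1071). This is illustrative Python only; the project's actual generation code is the set of C++ classes described above, with its own API.

```python
import struct

def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit big-endian words (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total > 0xFFFF:                      # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_ipv4_header(src: bytes, dst: bytes, payload_len: int,
                      proto: int = 6) -> bytes:
    """Minimal 20-byte IPv4 header; proto=6 selects TCP."""
    hdr = struct.pack("!BBHHHBBH4s4s",
                      0x45, 0,                 # version/IHL, TOS
                      20 + payload_len,        # total length
                      0, 0x4000,               # identification, flags (DF) + fragment offset
                      64, proto, 0,            # TTL, protocol, checksum placeholder
                      src, dst)
    csum = internet_checksum(hdr)
    return hdr[:10] + struct.pack("!H", csum) + hdr[12:]
```

A receiver validates the header by checksumming all 20 bytes, including the checksum field; a correct header sums to zero.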
    Visit original content creator repository https://github.com/wyvernSemi/tcpIpPg
  • deep-learning-project

    Deep Learning 2024 – Project Assignment


    Introduction

    Deep neural networks often suffer from severe performance degradation when tested on images that differ visually from those encountered during training. This degradation is caused by factors such as domain shift, noise, or changes in lighting.

    Recent research has focused on domain adaptation techniques to build deep models that can adapt from an annotated source dataset to a target dataset. However, such methods usually require access to downstream training data, which can be challenging to collect.

    An alternative approach is Test-Time Adaptation (TTA), which aims to improve the robustness of a pre-trained neural network to a test dataset, potentially by enhancing the network’s predictions on one test sample at a time. Two notable TTA methods for image classification are:

    • Marginal Entropy Minimization with One test point (MEMO): This method uses pre-trained models directly without making any assumptions about their specific training procedures or architectures, requiring only a single test input for adaptation.
• Test-Time Prompt Tuning (TPT): This method leverages pre-existing vision-language models without any assumptions about their specific training methods or architectures, enabling adaptation by tuning the text prompt on a single unlabeled test sample.

    MEMO

For this project, MEMO was applied to a pre-trained Vision Transformer, ViT-B/16, using the ImageNetV2 dataset. This network operates as follows: given a test point $x \in X$, it produces a conditional output distribution $p(y|x; w)$ over a set of classes $Y$, and predicts a label $\hat{y}$ as:

    $$ \hat{y} = M(x | w) = \arg \max_{y \in Y} p(y | x; w) $$


    Fig. 1 MEMO overview

Let $A = \{a_1, \dots, a_M\}$ be a set of augmentations (resizing, cropping, color jittering, etc.). Each augmentation $a_i \in A$ can be applied to an input sample $x$, resulting in a transformed sample denoted $a_i(x)$, as shown in the figure. The objective here is to make the model’s prediction invariant to those specific transformations.

MEMO starts by applying a set of $B$ augmentation functions sampled from $A$ to $x$. It then calculates the average, or marginal, output distribution $\bar{p}(y | x; w)$ by averaging the conditional output distributions over these augmentations, represented as:

    $$ \bar{p}(y | x; w) = \frac{1}{B} \sum_{i=1}^B p(y | a_i(x); w) $$

    Since the true label $y$ is not available during testing, the objective of Test-Time Adaptation (TTA) is twofold: (i) to ensure that the model’s predictions have the same label $y$ across various augmented versions of the test sample, (ii) to increase the confidence in the model’s predictions, given that the augmented versions have the same label. To this end, the model is trained to minimize the entropy of the marginal output distribution across augmentations, defined as:

$$ L(w; x) = H(\bar{p}(\cdot | x; w)) = -\sum_{y \in Y} \bar{p}(y | x; w) \log \bar{p}(y | x; w) $$
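The two equations above reduce to a few lines of array code. The following NumPy sketch computes the marginal entropy loss from a stack of per-augmentation output distributions (an illustration of the math only, not the project’s actual training loop; the inputs are stand-ins for real model outputs):

```python
import numpy as np

def marginal_entropy(probs: np.ndarray) -> float:
    """probs: shape (B, |Y|), one conditional distribution p(y | a_i(x); w)
    per augmentation. Returns the entropy H of the marginal p̄(y | x; w)."""
    p_bar = probs.mean(axis=0)          # average over the B augmentations
    p_bar = np.clip(p_bar, 1e-12, 1.0)  # guard against log(0)
    return float(-(p_bar * np.log(p_bar)).sum())
```

In MEMO this scalar is $L(w; x)$, back-propagated through the model for a single adaptation step before predicting $\hat{y}$ from the marginal distribution.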

    How to Run

    1. Clone the repository:
      git clone https://github.com/christiansassi/deep-learning-project
      cd deep_learning_project
2. Upload the notebook deep_learning.ipynb to Google Colab. NOTE: Make sure you use the T4 GPU.

    Contacts

    Matteo Beltrami – matteo.beltrami-1@studenti.unitn.it

    Pietro Bologna – pietro.bologna@studenti.unitn.it

    Christian Sassi – christian.sassi@studenti.unitn.it

    Visit original content creator repository https://github.com/christiansassi/deep-learning-project
  • RandomSubnetGenerator

Do you ever get sick of using the same subnets when you’re designing networks?
Can’t decide what IP address range you should use?
Don’t want to use the default 192.168.0.1/24, 172.16.0.1/12 or 10.0.0.1/8?
Want to limit the number of addresses to suit your use case?

Never fear, I have created this PowerShell script to fix your indecision.
It prompts for the number of IP addresses you need and then generates a random subnet to fulfil those needs.


Sample Outputs:
Enter the required number of IP addresses for the subnet: 70

Name              Value
----              -----
Starting IP       172.16.139.100
Ending IP         172.16.139.225
Subnet Mask       255.255.255.128
CIDR              /25
Usable Addresses  126
Range Type        Private

Enter the required number of IP addresses for the subnet: 300

Name              Value
----              -----
Starting IP       10.33.149.50
Ending IP         10.33.151.47
Subnet Mask       255.255.254.0
CIDR              /23
Usable Addresses  510
Range Type        Private

Enter the required number of IP addresses for the subnet: 600

Name              Value
----              -----
Starting IP       10.47.75.231
Ending IP         10.47.79.228
Subnet Mask       255.255.252.0
CIDR              /22
Usable Addresses  1022
Range Type        Private

Enter the required number of IP addresses for the subnet: 15000000 (yes, that is 15 million)

Name              Value
----              -----
Starting IP       10.221.110.47
Ending IP         11.221.110.44
Subnet Mask       255.0.0.0
CIDR              /8
Usable Addresses  16777214
Range Type        Private

Current known issue: if you need more IP addresses than private networking allows, the script will still give a valid range, which of course will not be within the private addressing range.
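The sizing arithmetic behind those outputs is simple: a subnet with n host bits has 2^n − 2 usable addresses, since the network and broadcast addresses are reserved. A sketch of that calculation in Python (the actual script is PowerShell; this just mirrors the math):

```python
import math

def smallest_prefix(required_hosts):
    """Smallest IPv4 subnet whose usable host count covers the request.
    Returns (CIDR prefix, usable addresses)."""
    host_bits = math.ceil(math.log2(required_hosts + 2))  # +2 for network/broadcast
    return 32 - host_bits, 2 ** host_bits - 2
```

This reproduces the sample outputs above: 70 addresses need a /25 (126 usable), 300 a /23 (510), 600 a /22 (1022), and 15 million a /8 (16,777,214).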

    Visit original content creator repository
    https://github.com/tehmessiah75/RandomSubnetGenerator

  • andcoachmark


A library that provides a highly customizable CoachmarkView.

    Demo

    Features

    • The Description Text is rendered dynamically on top or bottom.
• The ActionDescriptionText (text with an arrow pointing to the circle) is rendered dynamically left/top/bottom/right, in that order of priority.
    • Both views can be customized – the library takes inflated views as parameters.
    • Above described rendering strategy can be replaced by your own implementations or the priority of the available strategies can be changed.
    • Decide how the button that closes the coachmark should appear (cancel/ok on right side, ok button below description, no button just click to dismiss). It’s also possible to write your own rendering.
    • All colors and texts can be changed when setting up the Coachmark with the provided Builder.
    • Decide how the CoachmarkView should appear (NoAnimation or Animation that animates the circle around the clicked view getting smaller until it reaches the clicked view). It’s also possible to write your own startup animation.

    Implementation

    1. Add it in your root build.gradle at the end of repositories:

      allprojects {
         repositories {
         	...
         	maven { url 'https://jitpack.io' }
         }
      }
      
2. Add the Gradle dependency

      compile 'com.github.Kaufland:andcoachmark:1.2.9'
      
    3. Configure Coachmark

  LayoutInflater mInflater = (LayoutInflater) getSystemService(LAYOUT_INFLATER_SERVICE);

  View actionDescription = mInflater.inflate(R.layout.test_action_description, null);
  View description = mInflater.inflate(R.layout.test_description, null);

  OkAndCancelAtRightCornerButtonRenderer buttonRenderer = new OkAndCancelAtRightCornerButtonRenderer.Builder(this)
          .withCancelButton("Cancel", new CoachmarkClickListener() {
              @Override
              public boolean onClicked() {
                  Toast.makeText(MainActivity.this, "Cancel", Toast.LENGTH_LONG).show();
                  // return true to dismiss the coachmark
                  return true;
              }
          })
          .withOkButton("OK", new CoachmarkClickListener() {
              @Override
              public boolean onClicked() {
                  Toast.makeText(MainActivity.this, "OK", Toast.LENGTH_LONG).show();
                  // return true to dismiss the coachmark
                  return true;
              }
          })
          .build();

  new CoachmarkViewBuilder(MainActivity.this)
          .withAnimationRenderer(new ConcentricCircleAnimationRenderer.Builder().withDuration(500).build())
          .withActionDescription(actionDescription)
          .withDescription(description)
          .withButtonRenderer(buttonRenderer)
          .buildAroundView(clickedView)
          .show();
    Visit original content creator repository https://github.com/SchwarzIT/andcoachmark
  • RSA-from-scratch

    ACE414 Security of Systems and Services

    Assignment 3

    Implementation of RSA key-pair generation, encryption and decryption.

    TASK A:
    Key Derivation Function (KDF)
Generates an RSA public/private key pair and stores each one in the
corresponding file. The public key is the combination of the n and d
variables, while the private key is the n and e combination. The values of
n, d and e are computed according to the theory, while the random primes p
and q are chosen from two random positions of a table of the prime numbers
from 0 to 255, built using the sieve of Eratosthenes algorithm.
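The prime-table step described above can be sketched as follows (illustrative Python, independent of the assignment’s own source language):

```python
import random

def sieve_of_eratosthenes(limit):
    """All primes up to `limit`, inclusive."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, limit + 1, i):
                is_prime[j] = False
    return [i for i, prime in enumerate(is_prime) if prime]

primes = sieve_of_eratosthenes(255)   # the table of primes from 0 to 255
p, q = random.sample(primes, 2)       # two distinct random positions, as in Task A
```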

TASK B:
Data Encryption
Given an input file, a key file and an output file.
First, the modulus (n, for both the private and the public key) and the
exponent (d for the public key, e for the private key) are extracted from
the key file.
Then the input file is read as plaintext and sent for encryption using a
modular exponentiation function, which computes the ciphertext c given the
appropriate exponent and modulus.
The result is written to the output file named on the encryption command.
The length of the ciphertext is plaintext_length*8.

TASK C:
Data Decryption
Given an input file, a key file and an output file.
In this case, the key file must be the one not used for encryption.
For example, if the public.key file was used to encrypt the ciphertext we
are about to decrypt, we now need to provide the private.key file for
decryption on the command line.
As in the previous task, the modulus and exponent are extracted from the
key file and used in the modular exponentiation function, which this time
computes the plaintext (message) m.
Since encryption produced 8 bytes of ciphertext per byte of plaintext, the
decrypted message must be 1 byte per 8 bytes of ciphertext.
The resulting decrypted message is written to the desired output file.
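Tasks B and C are both instances of modular exponentiation, just with the opposite exponent. A minimal end-to-end sketch with classic toy parameters (for illustration only; primes this small are insecure, and which exponent lands in which key file follows the assignment’s own convention):

```python
p, q = 61, 53                  # toy primes; the assignment draws them from its sieve table
n = p * q                      # modulus, shared by both keys (n = 3233)
phi = (p - 1) * (q - 1)
e = 17                         # one exponent, coprime with phi
d = pow(e, -1, phi)            # the other exponent: d*e ≡ 1 (mod phi)

def mod_exp_bytes(data, exp, mod):
    """Modular exponentiation applied byte by byte, as in Tasks B and C."""
    return [pow(b, exp, mod) for b in data]

msg = b"RSA"
cipher = mod_exp_bytes(msg, e, n)            # Task B: encryption
plain = bytes(pow(c, d, n) for c in cipher)  # Task C: decryption
assert plain == msg
```

Each plaintext byte must be smaller than n for the per-byte scheme to round-trip, which holds here since n = 3233; the assignment’s 8-bytes-per-byte ciphertext layout simply stores each of these integers in a fixed-width field.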

    TASK D:
    Using the tool

The files provided for encryption were successfully encrypted and stored in
the required file format.

    Visit original content creator repository
    https://github.com/etheodoraki/RSA-from-scratch

  • ckb-fi-sdk

    CKB-FI logo

    CKB-FI SDK

    The SDK for CKB-FI ecosystem.

    • Bonding SDK

    🎾 Demo


    💊 Usage

1. UMD

    <script src="https://cdn.jsdelivr.net/npm/@ckb-fi/bonding@latest/dist/ckb-fi-bonding.umd.js"></script>
    
    <script>
      window.onload = function () {
        const BondingInstance = new CKBFiBonding.Bonding()
        console.log(BondingInstance, 'CKB-FI Bonding SDK initialized')
      }
    </script>

2. ES Module

npm i @ckb-fi/bonding -S

import {
  Bonding,
  Enum_Env
} from "@ckb-fi/bonding";

window.onload = function () {
  const BondingInstance = new Bonding()
  console.log(BondingInstance, 'CKB-FI Bonding SDK initialized')
}

    🛠️ Options

    🔸 I_BondingOptions

    interface I_BondingOptions {
      env?: Enum_Env
    }
Field  Description  Type      Default
env    Environment  Enum_Env  Enum_Env.PROD

// Initialize
const BondingInstance = new Bonding(options) // options: I_BondingOptions

    🧩 Methods

    🔹 getTicket: (address: string) => Promise

    // Get ticket by address
const ticket = await BondingInstance.getTicket('ckb...')
    console.log('GetTicket success', ticket)

    🔹 signMessage: (params: I_SignMessageParams) => Promise

    // Sign ticket using your current provider
const resSign = await BondingInstance.signMessage(params)
    console.log('SignMessage success', resSign)

    🔹 login: (params: I_LoginParams) => Promise

    // Login to ckb.fi
const token = await BondingInstance.login(params)
    console.log('Login success', token)

    🔹 launch: (params: I_LaunchParams) => Promise<BondingItem | undefined>

    // Launch memecoin
const data = await BondingInstance.launch(params)
    console.log('Launch success', data)

    🛠️ Development

Execute pnpm run dev to start the demo project in the /apps/ckb-fi-sdk-demo directory.


    🧿 Turborepo

This project was generated from a Turborepo starter. Run the following command to initialize a new project:

    npx create-turbo@latest -e with-vite

    And this project includes the following packages and apps:

    – Apps

    • ckb-fi-sdk-demo: used for testing SDK

    – Packages

    • docs: documentation
    • web: webapps
    • @ckb-fi/bonding: SDK for handling bondings
    • @ckb-fi/utils: a stub utility library shared by all applications
    • @ckb-fi/eslint-config: shared eslint configurations
    • @ckb-fi/typescript-config: tsconfig.jsons used throughout the monorepo

    🦴 Utils

This Turborepo has some additional tools already set up for you:


    Visit original content creator repository https://github.com/meme-base/ckb-fi-sdk
  • x-feed-parser

    (X) Feed Parser


    Parse RSS, Atom, JSON Feed, and HTML into a common JSON format. Complete with XML decoding, HTML sanitization, date standardization, media and metadata extraction.

    This project is based on the rbren/rss-parser upgraded to ESM with JSDoc types and the addition of features above.

    Install

    npm install x-feed-parser

    Usage

    import { parse } from 'x-feed-parser'
    
    let rawFeedString // XML (RSS/Atom), JSON Feed, or HTML
    const feed = parse(rawFeedString)

    Running the code above with a valid rawFeedString returns a response with the following schema:

    {
    	type: 'rss' | 'atom' | 'json' | 'html'
    	lang?: string
    	title?: string
    	description?: string
    	feedUrl?: string
    	siteUrl?: string
    	imageUrl?: string
    	etag?: string
    	updatedAt?: string
    	items?: [{
    		id?: string
    		url?: string
    		lang?: string
    		title?: string
    		summary?: string
    		author?: string
    		content?: string
    		snippet?: string
    		categories?: string[]
    		commentsUrl?: string
    		imageUrl?: string
    		media?: [{
    			url: string
    			length?: number
    			type?: string
    		}]
    		createdAt?: string
    		updatedAt?: string
    	}]
    	meta?: {
    		[key: string]: any // youtube, itunes metadata
    	}
    }
    

    See the test/ folder for complete usage examples.

    API

    This library exports the parse function, which is a thin wrapper for parseXmlFeed, parseJsonFeed, and parseHtmlFeed.

    parse(str)

    Identifies the filetype (xml, json, or html) and assigns the appropriate parser.

    import { parse } from 'x-feed-parser'
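The detection step can be approximated by sniffing the first non-whitespace characters of the input. This Python sketch illustrates the idea only; it is not the library’s actual (JavaScript) detection logic:

```python
def detect_feed_type(raw: str) -> str:
    """Guess the parser family for a raw feed string."""
    head = raw.lstrip()
    if head.startswith("{"):
        return "json"                       # JSON Feed
    if head.lower().startswith(("<!doctype html", "<html")):
        return "html"
    if head.startswith("<"):
        return "xml"                        # RSS or Atom
    raise ValueError("unrecognized feed format")
```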

    parseXmlFeed(str)

    Handler for RSS (v0.9 – v2.0) and Atom feeds.

    import { parseXmlFeed } from 'x-feed-parser'

    parseJsonFeed(str)

    Handler for JSON feeds (v1).

    import { parseJsonFeed } from 'x-feed-parser'

    parseHtmlFeed(str)

    WIP! Extracts feed data from an HTML document using rehype-extract-meta and rehype-extract-posts.

    import { parseHtmlFeed } from 'x-feed-parser'

    License

    MIT © Goran Spasojevic

    Visit original content creator repository https://github.com/gorango/x-feed-parser
  • Zend-Framework-3-Skeleton-Module-Uncoupled

    Zend Framework 3 Skeleton Module Uncoupled

    This is a sample skeleton module for use with
    zend-mvc applications.

    Installation

    First, decide on a namespace for your new module. For purposes of this README,
    we will use MyNewModule.

    Clone this repository into your application:

    $ cd module
    $ git clone https://github.com/zendframework/ZendSkeletonModule MyNewModule
    $ cd MyNewModule

    If you wish to version the new module with your application, and not as a
    separate project, remove the various Git artifacts within it:

    $ rm -Rf .git .gitignore

    If you want to version it separately, remove the origin remote so you can
    specify a new one later:

    $ git remote remove origin

The next step will be to change the namespace in the various files. Open each
of config/module.config.php, src/Module.php, and
src/Controller/SkeletonController.php, and replace any occurrence of
ZendSkeletonModule with your new namespace.

    find and sed

    You can also do this with the Unix utilties find and sed:

$ for php in $(find . -name '*.php'); do
>   sed --in-place -e 's/ZendSkeletonModule/MyNewModule/g' $php
> done

You can also rename the view folder; note that its name uses hyphens in
place of the namespace’s word boundaries:

mv view/zend-skeleton-module/ view/my-new-module/

    Next, we need to setup autoloading in your application. Open the composer.json
    file in your application root, and add an entry under the autoload.psr-4 key:

    "autoload": {
        "psr-4": {
            "MyNewModule\\": "module/MyNewModule/src/"
        }
    }

    When done adding the entry:

    $ composer dump-autoload

    Finally, notify your application of the module. Open
    config/modules.config.php, and add it to the bottom of the list:

return [
    /* ... */
    'MyNewModule',
];

    application.config.php

    If you are using an older version of the skeleton application, you may not
    have a modules.config.php file. If that is the case, open config/application.config.php
    instead, and add your module under the modules key:

    'modules' => [
        /* ... */
        'MyNewModule',
    ],

    Visit original content creator repository
    https://github.com/matheusdelima/Zend-Framework-3-Skeleton-Module-Uncoupled