Blog

  • docker-to-wsl

    Docker to WSL

    Docker to WSL is a tool that converts Docker images into WSL distributions.
    This project allows you to build or pull Docker images and then import them into WSL for further use.

    Why

    • Easily manage and replicate development environments across multiple systems
    • Avoid corrupting your main WSL distribution when testing or experimenting
    • Quickly start over with a fresh environment when needed
    • Leverage Docker’s vast ecosystem of images for WSL use
    • Simplify the process of creating custom WSL distributions

    Features

    • Convert Docker images to WSL distributions
    • Build custom Docker images and import them as WSL distributions
    • Pull existing Docker images and convert them to WSL distributions
    • Launch newly created WSL distributions directly
    • Support for custom Dockerfiles and configurations

    Installation

    Dependencies

    This is a Windows-only app.

    • Go 1.22 or later
    • Docker
    • WSL (Windows Subsystem for Linux)

    Steps

    go install github.com/k0in/docker-to-wsl/v2@main

    OR

    1. Clone the repository:

      git clone https://github.com/K0IN/docker-to-wsl.git
      cd docker-to-wsl
    2. Install the tool:

      go install

    Usage

    • --distro-name: Set the name for the new WSL distribution (required)
    • --image: Specify a Docker image to pull and convert; if the value points to a local file, it is built instead
    • --launch: Launch the new WSL distribution after creation
    • --set-default: Set the new WSL distribution as the default
    • --start-menu: Add the new WSL distribution to the Start Menu (the distro will then appear in the Start Menu and Windows search)
    • --help: Show help information
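
    For example, to pull an image, convert it, and register the result in one go (the image name here is just an illustration; the flags are documented above):

    docker-to-wsl --image alpine:latest --distro-name myDistro --launch --set-default --start-menu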

    Building and Importing a Dockerfile

    1. Create a Dockerfile in your current directory.

    2. Run the tool:

      docker-to-wsl --distro-name myDistro

    Pulling and Importing a Docker Image

    1. Run the tool:

      docker-to-wsl --image <docker-image-name> --distro-name myDistro

    Launching the WSL Distribution

    1. Add the --launch flag to the command:

      docker-to-wsl --image <docker-image-name> --distro-name myDistro --launch

    Dependencies and Licensing

    The required dependencies are listed under Installation above.

    This project is licensed under the MIT License. See the LICENSE file for more information.

    Quick start

    There is an example setup in the example directory.

    Simple example Dockerfile:

    # example image
    FROM alpine:latest 
    RUN apk update && apk add fish shadow
    RUN chsh -s /usr/bin/fish
    # Example: add an env variable (note: you can't use ENV)
    RUN fish -c "set -Ux key value"
    # Example: run a command on startup
    RUN printf "[boot]\ncommand = /etc/entrypoint.sh" >> /etc/wsl.conf
    RUN printf "#!/bin/sh\ntouch /root/booted" >> /etc/entrypoint.sh
    RUN chmod +x /etc/entrypoint.sh

    then run:

    docker-to-wsl --distro-name myDistro
    wsl -d myDistro

    Complex example

    Here is a full Dockerfile for a more complex setup:

    FROM ubuntu:24.10
    
    # basic setup
    RUN apt-get update && apt-get upgrade -y && apt-get install -y software-properties-common
    RUN apt update && apt install -y fish sudo curl
    RUN chsh -s /usr/bin/fish
    
    # setup user
    RUN useradd -m -s /usr/bin/fish -G sudo k0in
    # set password to 'k0in'
    RUN echo "k0in:k0in" | chpasswd k0in
    # configure wsl
    RUN printf "[user]\ndefault=k0in" >> /etc/wsl.conf
    
    # ssh setup
    COPY --chown=k0in:k0in files/.ssh /home/k0in/.ssh
    RUN chmod 700 /home/k0in/.ssh
    RUN chmod 600 /home/k0in/.ssh/*
    
    # setup sudo
    RUN echo "k0in ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
    
    # install packages
    RUN apt-get install -y wget git vim nano openssh-client clang gcc g++ make cmake gdb python3 python3-pip python3-venv
    
    # example you can use x11 apps :) - if you have wslg enabled
    RUN apt-get install -y x11-apps

    Visit original content creator repository
    https://github.com/K0IN/docker-to-wsl

  • http-tar-streamer

    http-tar-streamer

    http-tar-streamer is a simple HTTP server that allows you to stream tar archives of directories over HTTP. It supports both uncompressed and gzip-compressed tar archives.

    Features

    • Streams tar archives of directories over HTTP, without requiring any extra space on server
    • Uses minimal resources, with memory consumption under 10MB
    • Supports both uncompressed and gzip-compressed tar archives
    • Provides a simple web interface that displays a list of directories in the current working directory when you navigate to the root URL “/”
    • Allows you to download a tar archive of any directory by navigating to its URL with a .tar or .tar.gz extension
    • Cowardly refuses to serve files if the filename contains any separator like “/” to prevent directory traversal attacks
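
    To make the streaming idea concrete, here is a minimal Go sketch of the core mechanism (not the repository's actual source; gzip support and the directory-traversal check described above are omitted):

    package main

    import (
        "archive/tar"
        "io"
        "net/http"
        "os"
        "path/filepath"
        "strings"
    )

    // streamTar writes a tar archive of dir straight to w; nothing is staged
    // on disk, and memory use is bounded by the copy buffer.
    func streamTar(w io.Writer, dir string) error {
        tw := tar.NewWriter(w)
        defer tw.Close()
        return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
            if err != nil {
                return err
            }
            rel, err := filepath.Rel(dir, path)
            if err != nil || rel == "." {
                return err
            }
            hdr, err := tar.FileInfoHeader(info, "")
            if err != nil {
                return err
            }
            hdr.Name = filepath.ToSlash(rel)
            if err := tw.WriteHeader(hdr); err != nil {
                return err
            }
            if info.IsDir() {
                return nil
            }
            f, err := os.Open(path)
            if err != nil {
                return err
            }
            defer f.Close()
            _, err = io.Copy(tw, f) // file bytes flow straight into the response
            return err
        })
    }

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            // e.g. GET /mydir.tar streams ./mydir; error reporting after the
            // first write is best-effort since headers are already sent
            name := strings.TrimSuffix(strings.TrimPrefix(r.URL.Path, "/"), ".tar")
            w.Header().Set("Content-Type", "application/x-tar")
            if err := streamTar(w, name); err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
            }
        })
        http.ListenAndServe(":8080", nil)
    }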

    Usage

    To use http-tar-streamer, you can either run it directly from the command line or build it as a standalone binary.

    Running from the command line

    To run http-tar-streamer from the command line, use the following command:

    go run main.go

    This will start the server on port 8080 and serve the current working directory.

    Building as a standalone binary

    To build http-tar-streamer as a standalone binary, use the following command:

    go build -ldflags "-s -w" -o bin/http-tar-streamer main.go

    This will create a standalone binary named http-tar-streamer in the current working directory. You can then run the binary using the following command:

    ./bin/http-tar-streamer

    This will start the server on port 8080 and serve the current working directory.

    Downloading tar archives

    To download a tar archive of a directory, navigate to the URL for that directory with a .tar or .tar.gz extension. For example, if you have a directory named mydir in the current working directory, you can download a tar archive of that directory using the following URLs:

    http://localhost:8080/mydir.tar
    http://localhost:8080/mydir.tar.gz

    You can also use curl to download the tar archive and get the download speed. For example:

    curl -o /dev/null -s -w %{speed_download} http://localhost:8080/mydir.tar

    This will download the tar archive to /dev/null and print the download speed in bytes per second.

    Limitations

    http-tar-streamer does not support streaming tar archives of individual files, only directories.

    Visit original content creator repository
    https://github.com/blackswifthosting/http-tar-streamer

  • Jhrome

         ___________________
    ____/ Welcome to Jhrome \__________________________________________________
    
    This is Jhrome, a Google Chrome-style tabbed pane library for Java.
    
         ______________
    ____/ License Info \_______________________________________________________
    
    Jhrome is free software: you can redistribute it and/or modify
    it under the terms of the GNU Lesser General Public License as published by
    the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.
    
    Jhrome is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU Lesser General Public License for more details.
    
    You should have received a copy of the GNU Lesser General Public License
    along with Jhrome.  If not, see <http://www.gnu.org/licenses/>.
    
         _________________
    ____/ Getting Started \____________________________________________________
    
    See org/sexydock/tabs/demos/GettingStarted.java.  Here's a snippet:
    
    // To turn on Google Chrome-style tabs for all JTabbedPanes in an existing
    // application, simply put the following code in your application startup:
    
    UIManager.getDefaults( ).put( "TabbedPaneUI" , JhromeTabbedPaneUI.class.getName( ) );
    
    final JTabbedPane tabbedPane = new JTabbedPane( );
    
    // Or, just set the tabbed pane's UI directly:
    
    tabbedPane.setUI( new JhromeTabbedPaneUI( ) );
    
    // Now the tabbed pane will look like Google Chrome, but besides letting
    // you reorder its tabs, it won't let you do anything special beyond
    // BasicTabbedPaneUI behavior.
    
    // To turn on tab close buttons, do this:
    
    tabbedPane.putClientProperty( JhromeTabbedPaneUI.TAB_CLOSE_BUTTONS_VISIBLE , true );
    
    // But how to make the window close when the user closes the last tab? Use this:
    
    tabbedPane.addContainerListener( new DefaultTabsRemovedHandler( ) );
    
    // To turn on the new tab button, do this:
    
    tabbedPane.putClientProperty( JhromeTabbedPaneUI.NEW_TAB_BUTTON_VISIBLE , true );
    
    // Not so fast! The new tab button won't work yet. You have to define how the
    // content of new tabs is created. Here's how: (see GettingStarted.java to continue)
    
    ..................
    
    For an example of a basic full-featured tabbed application, see
    org/sexydock/tabs/demos/NotepadDemo.java.
    				
    To see all examples, run org.sexydock.tabs.demos.SexyTabsDemos.  The program
    displays source code for the examples and allows you to launch them.
    
    If you have other questions, check the Javadocs in org.sexydock.tabs.TabbedPane, 
    or send me an e-mail.
     
         _______________________________________
    ____/ Todo / Unsupported functions / Issues \______________________________
    
    The following are known to be issues:
    
    -There may be memory leaks caused by JhromeTabbedPaneUI that prevent
    disposed windows from being garbage collected/allowing the VM to shut
    down automatically when the last window is closed.
    -The ghost drag image window doesn't work on some systems (as it
    depends on AWTUtilities window transparency controls).  I need to add a
    check to automatically disable window transparency when not supported.
    
    The following JTabbedPane functions are currently known to be unsupported
    by JhromeTabbedPaneUI:
    
    -JTabbedPane.setForeground/BackgroundAt( int , Color ) (planned)
    -JTabbedPane.addTab( 0 , null ) (not planning to support null tab content)
    -keyboard navigation except for mnemonics (arrow keys etc. are not yet 
    supported)
    -left and right tab placement (planned)
    
    The following need to be done eventually:
    
    -A nice TabUI that *doesn't* look like Google Chrome
    -Detailed color customization
    -Custom tab reordering policies (to allow you to force a specific tab to
    stay at one end, etc.)
    
         ___________
    ____/ Compiling \__________________________________________________________
    
    The use of window transparency depends on Java SE 6 Update 10.
    
    Other than that, if you want to compile src/test, you'll need 
    fest-swing-1.2 and its dependencies.  I haven't Mavenized this process yet.
    
         ________
    ____/ Status \_____________________________________________________________
    
    This project is currently in beta stage.  It works very well, but there are
    no automated tests, it needs more documentation, and it needs more polish
    in areas like look and feel workflow and allowing access to tab DnD state.
    
    I'm going ahead and releasing it because I'm quite busy at the moment, and
    if I wait until it's polished enough for a first release, well, I'll never
    get around to it.  On the other hand, if I do release it now, I'll probably
    be more motivated to polish it up in the future.
    
    The root package is org.sexydock.tabs because I may make an entire docking
    framework based around this, if I feel like it.  If so, that framework will
    be called SexyDock, and this project will be called SexyTabs, or 
    SexyDock.Tabs, or whatever. In the package scheme, Jhrome refers 
    specifically to the Google Chrome look in the org.sexydock.jhrome package.  
    I'm releasing this project as "Jhrome" because I think the name will catch 
    on better.
    
         _________
    ____/ Contact \____________________________________________________________
    
    Jhrome was created by James ("Andy") Edwards.
    e-mail: andy@jedwards.name

    Visit original content creator repository
    https://github.com/jedwards1211/Jhrome

  • firewall

    firewall

    Ansible role to install and configure the firewall.

    Sponsor

    Building and improving this Ansible role has been sponsored by my current and previous employers like Cloudpunks GmbH and Proact Deutschland GmbH.

    Requirements

    • Minimum Ansible version: 2.10

    Default Variables

    firewall_after6_rules

    After IPv6 rules

    Default value

    firewall_after6_rules:

    firewall_after_rules

    After rules

    Default value

    firewall_after_rules:

    firewall_allow_ips

    Default value

    firewall_allow_ips: []

    firewall_before6_rules

    Before IPv6 rules

    Default value

    firewall_before6_rules:

    firewall_before_rules

    Before rules

    Default value

    firewall_before_rules:

    firewall_http_enabled

    HTTP enabled

    Default value

    firewall_http_enabled: true

    firewall_http_port

    HTTP port

    Default value

    firewall_http_port: '80'

    firewall_http_rule

    HTTP rule

    Default value

    firewall_http_rule: allow

    firewall_https_enabled

    HTTPS enabled

    Default value

    firewall_https_enabled: true

    firewall_https_port

    HTTPS port

    Default value

    firewall_https_port: '443'

    firewall_https_rule

    HTTPS rule

    Default value

    firewall_https_rule: allow

    firewall_incoming_policy

    Default incoming policy

    Default value

    firewall_incoming_policy: deny

    firewall_logging

    Enable logging

    Default value

    firewall_logging: true

    firewall_outgoing_policy

    Default outgoing policy

    Default value

    firewall_outgoing_policy: allow

    firewall_rules_extra

    Extra firewall rules

    Default value

    firewall_rules_extra: []

    firewall_rules_general

    General firewall rules

    Default value

    firewall_rules_general: []

    firewall_ssh_enabled

    SSH enabled

    Default value

    firewall_ssh_enabled: true

    firewall_ssh_port

    SSH port

    Default value

    firewall_ssh_port: '22'

    firewall_ssh_rule

    SSH rule

    Default value

    firewall_ssh_rule: allow

    floatingip_path

    List of whitelisted IPs
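
    As an illustration, a minimal playbook overriding a few of the documented defaults might look like this (the host pattern and values are placeholders; the role name is assumed from the repository path):

    - hosts: all
      roles:
        - role: rolehippie.firewall
          vars:
            firewall_ssh_port: '2222'
            firewall_incoming_policy: deny
            firewall_allow_ips:
              - 192.0.2.10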

    Discovered Tags

    firewall

    Dependencies

    License

    Apache-2.0

    Author

    Thomas Boerger

    Visit original content creator repository https://github.com/rolehippie/firewall
  • rules_antlr

    Build Status Java 8+ License

    ANTLR Rules for Bazel

    These build rules are used for processing ANTLR grammars with Bazel.

    Support Matrix

              antlr4          antlr3          antlr2
    C                         Gen             Gen
    C++       Gen + Runtime   Gen + Runtime   Gen + Runtime
    Go        Gen + Runtime
    Java      Gen + Runtime   Gen + Runtime   Gen + Runtime
    ObjC                      Gen
    Python2   Gen + Runtime   Gen + Runtime   Gen + Runtime
    Python3   Gen + Runtime   Gen + Runtime

    Gen: Code Generation
    Runtime: Runtime Library bundled

    Setup

    Add the following to your WORKSPACE file to include the external repository and load the necessary Java dependencies for the antlr rule:

    load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
    
    http_archive(
        name = "rules_antlr",
        sha256 = "26e6a83c665cf6c1093b628b3a749071322f0f70305d12ede30909695ed85591",
        strip_prefix = "rules_antlr-0.5.0",
        urls = ["https://github.com/marcohu/rules_antlr/archive/0.5.0.tar.gz"],
    )
    
    load("@rules_antlr//antlr:repositories.bzl", "rules_antlr_dependencies")
    rules_antlr_dependencies("4.8")

    More detailed instructions can be found in the Setup document.

    Build Rules

    To add ANTLR code generation to your BUILD files, you first have to load the extension for the desired ANTLR release.

    For ANTLR 4:

    load("@rules_antlr//antlr:antlr4.bzl", "antlr")

    For ANTLR 3:

    load("@rules_antlr//antlr:antlr3.bzl", "antlr")

    For ANTLR 2:

    load("@rules_antlr//antlr:antlr2.bzl", "antlr")

    You can then invoke the rule:

    antlr(
        name = "parser",
        srcs = ["Hello.g4"],
        package = "hello.world",
    )

    It’s also possible to use different ANTLR versions in the same file via aliasing:

    load("@rules_antlr//antlr:antlr4.bzl", antlr4 = "antlr")
    load("@rules_antlr//antlr:antlr3.bzl", antlr3 = "antlr")
    
    antlr4(
        name = "parser",
        srcs = ["Hello.g4"],
        package = "hello.world",
    )
    
    antlr3(
        name = "old_parser",
        srcs = ["OldHello.g"],
        package = "hello.world",
    )

    Refer to the rule reference documentation for the available rules and attributes.

    Basic Java Example

    Suppose you have the following directory structure for a simple ANTLR project:

    HelloWorld/
    └── src
        └── main
            └── antlr4
                ├── BUILD
                └── Hello.g4
    WORKSPACE
    

    HelloWorld/src/main/antlr4/Hello.g4

    grammar Hello;
    r  : 'hello' ID;
    ID : [a-z]+;
    WS : [ \t\r\n]+ -> skip;
    

    To add code generation to a BUILD file, you load the desired build rule and create a new antlr target. The output—here a .jar file with the generated source files—can be used as input for other rules.

    HelloWorld/src/main/antlr4/BUILD

    load("@rules_antlr//antlr:antlr4.bzl", "antlr")
    
    antlr(
        name = "parser",
        srcs = ["Hello.g4"],
        package = "hello.world",
        visibility = ["//visibility:public"],
    )

    Building the project generates the lexer/parser files:

    $ bazel build //HelloWorld/...
    INFO: Analyzed 2 targets (23 packages loaded, 400 targets configured).
    INFO: Found 2 targets...
    INFO: Elapsed time: 15.295s, Critical Path: 14.37s
    INFO: 8 processes: 6 processwrapper-sandbox, 2 worker.
    INFO: Build completed successfully, 12 total actions
    

    To compile the generated files, add the generating target as input for the java_library or java_binary rules and reference the required ANTLR dependency:

    load("@rules_java//java:defs.bzl", "java_library")
    
    java_library(
        name = "HelloWorld",
        srcs = [":parser"],
        deps = ["@antlr4_runtime//jar"],
    )

    Refer to the examples directory for further samples.

    Project Layout

    ANTLR rules will store all generated source files in a target-name.srcjar zip archive below your workspace bazel-bin folder. Depending on the ANTLR version, there are three ways to control namespacing and directory structure for generated code, all with their pros and cons.

    1. The package rule attribute (antlr4 only). Setting the namespace via the package attribute will generate the corresponding target-language-specific namespacing code (where applicable) and put the generated source files below a corresponding directory structure. To skip creating the directory structure, set the layout attribute to flat (a sketch follows this list).
      Very expressive and allows language-independent grammars, but it is only available with ANTLR 4, requires several runs for different namespaces, might complicate refactoring, and can conflict with language-specific code in @header {...} sections as they are mutually exclusive.

    2. Language specific application code in grammar @header {...} section. To not create the corresponding directory structure, set the layout attribute to flat.
      Allows different namespaces to be processed in a single run and will not require changes to build files upon refactoring, but ties grammars to a specific language and can conflict with the package attribute as they are mutually exclusive.

    3. The project layout (antlr4 only). Putting your grammars below a common project directory will determine namespace and corresponding directory structure for the generated source files from the relative project path. ANTLR rules uses different defaults for the different target languages (see below), but you can define the root directory yourself via the layout attribute.
      Allows different namespaces to be processed in a single run without language coupling, but requires conformity to a specific (albeit configurable) project layout and the layout attribute for certain languages.
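
    For example, option 1 with the directory structure suppressed might look as follows (the grammar and package are reused from the earlier example; the flat value comes straight from the description above):

    antlr(
        name = "parser",
        srcs = ["Hello.g4"],
        package = "hello.world",
        layout = "flat",
    )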

    Common Project Directories

    The antlr4 rule supports a common directory layout to figure out namespacing from the relative directory structure. The table below lists the default paths for the different target languages. The version number at the end is optional.

    Language Default Directory
    C src/antlr4
    Cpp src/antlr4
    CSharp, CSharp2, CSharp3 src/antlr4
    Go  
    Java src/main/antlr4
    JavaScript src/antlr4
    Python, Python2, Python3 src/antlr4
    Swift  

    For languages with no default, you would have to set your preference with the layout attribute.

    Visit original content creator repository https://github.com/marcohu/rules_antlr
  • Android-Web-App-Example

    Android Web-App Example

    🔍 Overview

    The Android WebApp Example is a ready-to-use Android Studio project designed to create a simple web app with built-in controls at the bottom, including forward, back, and home buttons. You can easily customize this web app by modifying the MainActivity.java file to specify your desired URL and then compile it to generate a finished .apk file. This project simplifies the process of creating web apps for your website. All you need is Android Studio installed on your system, and you can quickly make the necessary changes.

    Note

    No new features are planned for this project at this time.

    Tip

    This project is actively maintained, with regular updates and prompt fixes for reported issues.

    Requirements for Compilation

    This software is developed using Android Studio, and we recommend using this development environment to modify the source code and compile the project. You can simply import the source folder into Android Studio.

    Modifying the Source Code

    To customize the web app for your specific needs, follow these steps:

    1. Change the variable theURL to your website’s URL (a sketch follows this list). You can find this variable in the following file:
      “source\app\src\main\java\com\bugfish\webapp\MainActivity.java”

    2. If necessary, replace the default app image by swapping out the image files located in:
      “source\app\src\main\res\”

    3. If you intend to create multiple web apps, remember to update the Gradle application ID (applicationId) in the Gradle files. Using the same application ID for multiple web apps may result in conflicts when installing them on the same device!
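
    As a rough illustration of step 1, the edit might look like this (the surrounding activity code is hypothetical; only the variable name theURL and the file location come from this project):

    package com.bugfish.webapp;

    import android.os.Bundle;
    import android.webkit.WebView;
    import androidx.appcompat.app.AppCompatActivity;

    public class MainActivity extends AppCompatActivity {
        // Change this to your website's URL before compiling the .apk
        private static final String theURL = "https://www.example.com";

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            WebView webView = new WebView(this);
            webView.getSettings().setJavaScriptEnabled(true);
            webView.loadUrl(theURL);
            setContentView(webView);
        }
    }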

    📖 Documentation

    The following documentation is intended for both end-users and developers.

    Description Link
    Access the online documentation for this project. https://bugfishtm.github.io/Android-Web-App-Example/index.html
    If you’d prefer to access the documentation locally, you can find it at ./docs/index.html

    ❓ Support Channels

    If you encounter any issues or have questions while using this software, feel free to contact us.

    📢 Spread the Word

    Help us grow by sharing this project with others! You can:

    • Tweet about it – Share your thoughts on Twitter/X and link us!
    • Post on LinkedIn – Let your professional network know about this project on LinkedIn.
    • Share on Reddit – Talk about it in relevant subreddits like r/programming or r/opensource.
    • Tell Your Community – Spread the word in Discord servers, Slack groups, and forums.

    📁 Repository Structure

    This table provides an overview of key files and folders related to the repository. Click on the links to access each file for more detailed information. If certain folders are missing from the repository, they are irrelevant to this project.

    Document Type Description
    .github Folder with github setup files.
    .github/CODE_OF_CONDUCT.md The community guidelines.
    _changelogs Folder for changelogs.
    _releases Folder for releases.
    _source Folder with the source code.
    docs Folder for the documentation.
    .gitattributes Repository setting file. Only for development purposes.
    .gitignore Repository ignore file. Only for development purposes.
    README.md Readme of this project. You are currently looking at this file.
    repository_reset.bat File to reset this repository. Only for development purposes.
    repository_update.bat File to update this repository. Only for development purposes.
    CONTRIBUTING.md Information for contributors.
    CHANGELOG.md Information about changelogs.
    SECURITY.md How to handle security issues.
    LICENSE.md License of this project.

    📑 Changelog Information

    Refer to the _changelogs folder for detailed insights into the changes made across different versions. The changelogs are available in HTML format within this folder, providing a structured record of updates, modifications, and improvements over time. Additionally, GitHub Releases follow the same structure and also include these changelogs for easy reference.

    🌱 Contributing to the Project

    I am excited that you’re considering contributing to our project! Here are some guidelines to help you get started.

    How to Contribute

    1. Fork the repository to create your own copy.
    2. Create a new branch for your work (e.g., feature/my-feature).
    3. Make your changes and ensure they work as expected.
    4. Run tests to confirm everything is functioning correctly.
    5. Commit your changes with a clear, concise message.
    6. Push your branch to your forked repository.
    7. Submit a pull request with a detailed description of your changes.
    8. Reference any related issues or discussions in your pull request.

    Coding Style

    • Keep your code clean and well-organized.
    • Add comments to explain complex logic or functions.
    • Use meaningful and consistent variable and function names.
    • Break down code into smaller, reusable functions and components.
    • Follow proper indentation and formatting practices.
    • Avoid code duplication by reusing existing functions or modules.
    • Ensure your code is easily readable and maintainable by others.

    🤝 Community Guidelines

    We’re on a mission to create groundbreaking solutions, pushing the boundaries of technology. By being here, you’re an integral part of that journey.

    Positive Guidelines:

    • Be kind, empathetic, and respectful in all interactions.
    • Engage thoughtfully, offering constructive, solution-oriented feedback.
    • Foster an environment of collaboration, support, and mutual respect.

    Unacceptable Behavior:

    • Harassment, hate speech, or offensive language.
    • Personal attacks, discrimination, or any form of bullying.
    • Sharing private or sensitive information without explicit consent.

    Let’s collaborate, inspire one another, and build something extraordinary together!

    🛡️ Warranty and Security

    I take security seriously and appreciate responsible disclosure. If you discover a vulnerability, please follow these steps:

    • Do not report it via public GitHub issues or discussions. Instead, please contact the security@bugfish.eu email address directly.
    • Provide as much detail as possible, including a description of the issue, steps to reproduce it, and its potential impact.

    I aim to acknowledge reports within 2–4 weeks and will update you on our progress once the issue is verified and addressed.

    This software is provided as-is, without any guarantees of security, reliability, or fitness for any particular purpose. We do not take responsibility for any damage, data loss, security breaches, or other issues that may arise from using this software. By using this software, you agree that we are not liable for any direct, indirect, incidental, or consequential damages. Use it at your own risk.

    📜 License Information

    The license for this software can be found in the LICENSE.md file. Third-party licenses are located in the ./_licenses folder. The software may also include additional licensed software or libraries.

    🐟 Bugfish

    Visit original content creator repository
    https://github.com/bugfishtm/Android-Web-App-Example

  • async-context

    Async Context

    Zero-dependency module for NestJS that allows you to track context across async calls

    Installation

    npm install @nestjs-steroids/async-context
    yarn add @nestjs-steroids/async-context
    pnpm install @nestjs-steroids/async-context

    Usage

    The first step is to register AsyncContext inside interceptor (or middleware)

    src/async-context.interceptor.ts

    import { randomUUID } from 'crypto'
    import {
      Injectable,
      NestInterceptor,
      ExecutionContext,
      CallHandler
    } from '@nestjs/common'
    import { AsyncContext } from '@nestjs-steroids/async-context'
    import { Observable } from 'rxjs'
    
    @Injectable()
    export class AsyncContextInterceptor implements NestInterceptor {
      constructor (private readonly ac: AsyncContext<string, any>) {}
    
      intercept (context: ExecutionContext, next: CallHandler): Observable<any> {
        this.ac.register() // Important to call .register or .registerCallback (good for middleware)
        this.ac.set('traceId', randomUUID()) // Setting default value traceId
        return next.handle()
      }
    }
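
    If you prefer middleware over an interceptor, registerCallback can wrap the next() call so the rest of the request runs inside the context. A sketch (the middleware class and its wiring are mine; the AsyncContext API comes from this module):

    import { randomUUID } from 'crypto'
    import { Injectable, NestMiddleware } from '@nestjs/common'
    import { AsyncContext } from '@nestjs-steroids/async-context'
    
    @Injectable()
    export class AsyncContextMiddleware implements NestMiddleware {
      constructor (private readonly ac: AsyncContext<string, any>) {}
    
      use (req: unknown, res: unknown, next: () => void): void {
        // registerCallback runs the callback inside a fresh async context
        this.ac.registerCallback(() => {
          this.ac.set('traceId', randomUUID())
          next()
        })
      }
    }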

    The second step is to register AsyncContextModule and interceptor inside main module

    src/app.module.ts

    import { APP_INTERCEPTOR } from '@nestjs/core';
    import { Module } from '@nestjs/common';
    import { AsyncContextModule } from '@nestjs-steroids/async-context';
    import { AsyncContextInterceptor } from './async-context.interceptor';
    
    @Module({
      imports: [
        AsyncContextModule.forRoot()
      ],
      providers: [
        {
          provide: APP_INTERCEPTOR,
          useClass: AsyncContextInterceptor,
        },
      ],
    })
    export class AppModule {}

    The last step is to inject AsyncContext inside controller or service and use it

    src/app.controller.ts

    import { Controller, Get, Logger } from '@nestjs/common'
    import { AppService } from './app.service'
    import { AsyncContext } from '@nestjs-steroids/async-context'
    
    @Controller()
    export class AppController {
      constructor (
        private readonly appService: AppService,
        private readonly asyncContext: AsyncContext<string, string>,
        private readonly logger: Logger
      ) {}
    
      @Get()
      getHello (): string {
        this.logger.log('AppController.getHello', this.asyncContext.get('traceId'))
        process.nextTick(() => {
          this.logger.log(
            'AppController.getHello -> nextTick',
            this.asyncContext.get('traceId')
          )
          setTimeout(() => {
            this.logger.log(
              'AppController.getHello -> nextTick -> setTimeout',
              this.asyncContext.get('traceId')
            )
          }, 0)
        })
        return this.appService.getHello()
      }
    }

    Output example

    [Nest] 141168  - 02/01/2022, 11:33:11 PM     LOG [NestFactory] Starting Nest application...
    [Nest] 141168  - 02/01/2022, 11:33:11 PM     LOG [InstanceLoader] AsyncContextModule dependencies initialized +47ms
    [Nest] 141168  - 02/01/2022, 11:33:11 PM     LOG [InstanceLoader] AppModule dependencies initialized +1ms
    [Nest] 141168  - 02/01/2022, 11:33:11 PM     LOG [RoutesResolver] AppController {/}: +12ms
    [Nest] 141168  - 02/01/2022, 11:33:11 PM     LOG [RouterExplorer] Mapped {/, GET} route +7ms
    [Nest] 141168  - 02/01/2022, 11:33:11 PM     LOG [NestApplication] Nest application successfully started +5ms
    [Nest] 141168  - 02/01/2022, 11:33:13 PM     LOG [7398d3ad-c246-4650-8dd0-f8f29238bdd7] AppController.getHello
    [Nest] 141168  - 02/01/2022, 11:33:13 PM     LOG [7398d3ad-c246-4650-8dd0-f8f29238bdd7] AppController.getHello -> nextTick
    [Nest] 141168  - 02/01/2022, 11:33:13 PM     LOG [7398d3ad-c246-4650-8dd0-f8f29238bdd7] AppController.getHello -> nextTick -> setTimeout
    

    API

    AsyncContext is almost identical to the native Map object.

    class AsyncContext {
      // Clear all values from storage
      clear(): void;
    
      // Delete value by key from storage
      delete(key: K): boolean;
    
      // Iterate over storage
      forEach(callbackfn: (value: V, key: K, map: Map<K, V>) => void, thisArg?: any): void;
    
      // Get value from storage by key
      get(key: K): V | undefined;
    
      // Check if key exists in storage
      has(key: K): boolean;
    
      // Set value by key in storage
      set(key: K, value: V): this;
    
      // Get the number of keys stored in storage
      get size(): number;
    
      // Register context, it's better to use this method inside the interceptor
      register(): void
    
      // Register context for a callback, it's better to use this inside the middleware
      registerCallback<R, TArgs extends any[]>(callback: (...args: TArgs) => R, ...args: TArgs): R
    
      // Unregister context
      unregister(): void
    }

    AsyncContextModule

    interface AsyncContextModuleOptions {
      // Should register this module as global, default: true
      isGlobal?: boolean
    
      // In case if you need to provide custom value AsyncLocalStorage
      alsInstance?: AsyncLocalStorage<any>
    }
    
    class AsyncContextModule {
      static forRoot (options?: AsyncContextModuleOptions): DynamicModule
    }

    Migration guide from V1

    You need to replace AsyncHooksModule with AsyncContextModule.forRoot()

    License

    MIT

    Visit original content creator repository
    https://github.com/nestjs-steroids/async-context

  • imperia

    Imperia

    Imperia is a work-in-progress experiment with imperative programming in Lean. At present, the focus is an alternative do notation (using the keyword μdo) that supports non-monadic types. However, the implementation is not complete, so it is currently lacking many features of the standard do notation (e.g., try, for, mut).

    Example

    The standard approach to writing imperative programs in Lean is to use do notation. The notation is very elegant, but it only supports monadic types.

    To demonstrate how this can be a problem, consider Lean core’s ParserFn, which is the type of primitive parser functions used to parse Lean. ParserFn is not a monad for performance reasons. This means that Lean parser functions can neither use do notation in their code nor follow a Parsec-like functional style. Instead, they must be written in a very verbose and inelegant manner that carefully tracks the state and context. A simple example is charLitFnAux in core:

    /--
    Parses the part of a character literal
    after the initial quote (e.g., `a'` of `'a'`).
    
    `startPos` is the position of the initial quote.
    -/
    def charLitFnAux (startPos : String.Pos) : ParserFn := fun c s =>
      let input := c.input
      let i     := s.pos
      if h : input.atEnd i then s.mkEOIError
      else
        let curr := input.get' i h
        let s    := s.setPos (input.next' i h)
        let s    := if curr == '\\' then quotedCharFn c s else s
        if s.hasError then s
        else
          let i    := s.pos
          let curr := input.get i
          let s    := s.setPos (input.next i)
          if curr == '\'' then mkNodeToken charLitKind startPos c s
          else s.mkUnexpectedError "missing end of character literal"

    Imperia’s μdo notation provides an alternative elaboration of the same do-elements of the standard do notation, but with support for types like ParserFn. With μdo, charLitFnAux can be written in Parsec-like functional style:

    def charLitFnAux (startPos : String.Pos) : ParserFn := μdo
      let curr ← anyChar
      μdo if curr == '\\' then quotedCharFn
      guardError
      let curr ← anyCharUnchecked
      if curr == '\'' then
        mkNodeToken charLitKind startPos
      else
        raise "missing end of character literal"

    While it may look better, recall that ParserFn was not a monad for a reason — performance! Fortunately, Imperia’s approach manages to maintain the same IR and even the same simp normal forms as the original implementation.

    If this has piqued your curiosity, ImperiaTests/parser.lean contains the nitty-gritty details of how this Parsec-like ParserFn function is implemented with Imperia.

    Visit original content creator repository
    https://github.com/tydeu/imperia

  • AI-Sleep

    AI-Driven Beacon Sleep

    Overview

    This is a VERY VERY basic implementation of taking a data set to train a model to predict whether a sleep setting will be detected by an EDR. It uses a supervised learning model trained on fake data modeled on detection capabilities for long-term beacon detection. THIS IS NOT MEANT TO BE USED IN AN OP.

    Components

    1. Data Generation (data-creation/generate.py)

    The generate.py script is responsible for generating synthetic data that simulates different operating system versions, EDR types, network conditions, and time of day. This data is used to train and test the AI model.

    • OS Versions: Windows 10, Windows 11
    • EDR Types: Includes popular EDR solutions like Crowdstrike Falcon, SentinelOne, and others.
    • Traffic Volumes: Low, Medium, High
    • Beacon Types: HTTP, HTTPS, DNS, SMB
    • Time of Day: Simulated as an hour of the day (0-23)
    • Detection Outcomes: Detected, Undetected

    The script generates a CSV file (improved_generated_test.csv) with 8000 rows of data, each containing a combination of the above parameters and a detection outcome.

    2. Process And Output Model (scripts/process.py)

    The process.py script is responsible for loading, preprocessing, and training a machine learning model on the synthetic data generated above. It uses a GradientBoostingClassifier to predict detection outcomes; a condensed sketch follows the list below.

    • Data Loading: Reads data from a CSV file.
    • Preprocessing: Encodes categorical features (OS Version, EDR Type, Network Load, Beacon Type) and converts string-based numerical columns (Jitter, Initial Sleep Time) to float. It also encodes the ‘Detection Outcome’ as a binary variable.
    • Data Splitting: Splits the data into training and test sets.
    • Model Training: Trains a GradientBoosting model on the training data.
    • Evaluation: Evaluates the model’s accuracy and prints a classification report.
    • Model Saving: Saves the trained model to a file (trained_model.joblib).
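
    As a rough illustration, the pipeline described above might be condensed like this (column names are taken from this README; the actual script may differ):

    import sys

    import pandas as pd
    from joblib import dump
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import LabelEncoder

    df = pd.read_csv(sys.argv[1])

    # Encode categorical features and the binary detection outcome
    for col in ['OS Version', 'EDR Type', 'Network Load', 'Beacon Type']:
        df[col] = LabelEncoder().fit_transform(df[col])
    df['Detection Outcome'] = (df['Detection Outcome'] == 'Detected').astype(int)

    # Convert string-based numeric columns to float
    for col in ['Jitter', 'Initial Sleep Time']:
        df[col] = df[col].astype(float)

    X = df.drop(columns=['Detection Outcome'])
    y = df['Detection Outcome']
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

    model = GradientBoostingClassifier().fit(X_train, y_train)
    print(classification_report(y_test, model.predict(X_test)))
    dump(model, 'trained_model.joblib')  # consumed later by prediction.py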

    3. Prediction Model (aggressor-script/prediction.py)

    The prediction.py script loads a pre-trained model to predict the detection outcome based on input parameters. It preprocesses the input data, encodes categorical variables, and uses the model to make predictions.

    • Preprocessing: Encodes OS version, EDR type, network load, beacon type, and considers the time of day.
    • Prediction: Outputs whether the sleep setting is “Detected” or “Undetected”.

    4. aggressor-script Integration (aggressor-script/sleepinNstuff.cna)

    The Aggressor script (aggressor-script/sleepinNstuff.cna) integrates the AI model into a security tool, allowing it to apply AI-generated sleep settings.

    • aggressor-script/sleepinNstuff.cna: Executes the Python script to generate sleep and jitter settings that will return “Undetected”, which are then applied to a beacon session.

    Usage

    1. Data Generation: Run generate.py to create the synthetic dataset.

      python generate.py
    2. Process Data and Output Model: Run process.py to create the model.

      python scripts/process.py path/to/your_data.csv
    3. Model Prediction: Use prediction.py to predict detection outcomes based on input parameters.

      python aggressor-script/prediction.py <OS Version> <EDR Type> <Beacon Type>
    4. Aggressor Execution: Use the aggressor-script scripts to integrate the AI model into your CS and apply the generated sleep settings.

    Disclaimer

    This project is a proof of concept and uses synthetic data. The AI model is not trained on real-world data and should not be used in production environments.

    Inspiration

    0xtriboulet’s talk that integrated AI facial recognition into a loader.

    Visit original content creator repository
    https://github.com/0xflagplz/AI-Sleep

  • olist-ecommerce-analytics-api

    Olist Ecommerce Analytics API

    A RESTful API to expose big data analytics on Olist e-commerce data. The analytics were processed using Apache Hadoop deployed on Azure. This module is only responsible for exposing the results, which are stored in Azure Blob Storage. This was done as part of an MSc module project.

    High-level architecture diagram

    Repository contains only the Analytics API module


    Quick Start

    After setting up your local dev environment, you can clone this repository and run the solution. Make sure all the other interconnected services are running in the cloud.

    Prerequisites

    You’ll need the following tools:

    Development Environment Setup

    First clone this repository locally.

    • Install all of the prerequisite tools mentioned above.

    Build and run from source

    With Visual Studio: Open up the solution using Visual Studio.

    • Restore solution nuget packages.
    • Rebuild solution once.
    • Run the solution.
    • Browse the local Swagger URL to explore the API.

    License

    Licensed under the MIT license.

    Visit original content creator repository https://github.com/gayankanishka/olist-ecommerce-analytics-api