Blog

  • ha-grohe_smarthome

    Home Assistant – Grohe Smarthome (Sense and Blue)

Integration Usage

    Grohe Smarthome (Sense and Blue) integration for Home Assistant

This integration brings the Grohe Smarthome devices into Home Assistant. Namely, these are:

    • Grohe Sense (small leak sensor)
    • Grohe Sense Guard (main water pipe sensor/breaker)
    • Grohe Blue Home (water filter with carbonation)
    • Grohe Blue Professional (water filter with carbonation)

    Disclaimer

The authors of this integration are not affiliated with Grohe or the Grohe app in any way. The project depends on the undocumented and unofficial Grohe API. If Grohe decides to change the API in any way, this could break the integration. Even though the integration was tested on several devices without problems, the authors are not liable for any potential issues, malfunctions, or damages arising from using this integration. Use at your own risk!

    Getting started

    If you’re new, take a look at the Installation Guide here: Getting Started

Other documentation for available actions, device sensors, notifications, etc. can be found in the wiki.

    Remarks on the “API”

I have not seen any documentation from Grohe on the API this integration uses, so it was likely only intended for their app. Breaking changes have happened before and can easily happen again. I try to keep the integration updated to their latest API.

The API returns much more detailed data than is exposed via these sensors. For withdrawals, it returns an exact start and end time for each withdrawal, as well as the volume withdrawn. It seems to store data since the water meter was installed, so you can extract a lot of historic data (but then polling gets a bit slow). I’m not aware of any good way to expose time series data like this in Home Assistant (suddenly I learn that 2 liters were withdrawn 5 minutes ago, and 5 liters 2 minutes ago). If anyone has any good ideas/pointers, that’d be appreciated.
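For illustration, here is a minimal sketch of how such withdrawal records could be bucketed into hourly totals before being handed to a sensor. Since the API is undocumented, the record shape (start, end, volume fields) is an assumption, not the real schema:

    from collections import defaultdict
    from datetime import datetime

    # Hypothetical withdrawal records; field names are assumptions,
    # not the real (undocumented) API schema.
    withdrawals = [
        {"start": "2024-01-01T08:02:11", "end": "2024-01-01T08:03:40", "volume": 2.0},
        {"start": "2024-01-01T08:41:05", "end": "2024-01-01T08:42:30", "volume": 5.0},
        {"start": "2024-01-01T09:10:00", "end": "2024-01-01T09:11:00", "volume": 1.5},
    ]

    # Bucket withdrawals into hourly totals, a shape that is easier to
    # expose as a cumulative sensor than individual past events.
    hourly_totals = defaultdict(float)
    for w in withdrawals:
        start = datetime.fromisoformat(w["start"])
        bucket = start.replace(minute=0, second=0, microsecond=0)
        hourly_totals[bucket] += w["volume"]

    for hour, litres in sorted(hourly_totals.items()):
        print(f"{hour:%Y-%m-%d %H:00}: {litres:.1f} L")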

    Credits

    Thanks to:

    • gkreitz for the initial implementation of the Grohe Sense
    • rama1981 for reaching out and going through the trial and error for the Grohe Blue Professional.
    • daxyorg for going through the trial and error of the refactored version and testing with Grohe Blue Home.
• windkh, from whom I’ve taken a lot of the notification types available.
    • FlorianSW for the initial protocol understanding
    Visit original content creator repository https://github.com/Flo-Schilli/ha-grohe_smarthome
  • uptime-monitor-dashboard


    About Laravel

Laravel is a web application framework with expressive, elegant syntax. We believe development must be an enjoyable and creative experience to be truly fulfilling. Laravel takes the pain out of development by easing common tasks used in many web projects, such as:

    • Simple, fast routing engine.
    • Powerful dependency injection container.
    • Multiple back-ends for session and cache storage.
    • Expressive, intuitive database ORM.
    • Database agnostic schema migrations.
    • Robust background job processing.
    • Real-time event broadcasting.

Laravel is accessible, powerful, and provides tools required for large, robust applications.

    Learning Laravel

    Laravel has the most extensive and thorough documentation and video tutorial library of all modern web application frameworks, making it a breeze to get started with the framework.

    You may also try the Laravel Bootcamp, where you will be guided through building a modern Laravel application from scratch.

    If you don’t feel like reading, Laracasts can help. Laracasts contains over 2000 video tutorials on a range of topics including Laravel, modern PHP, unit testing, and JavaScript. Boost your skills by digging into our comprehensive video library.

    Laravel Sponsors

    We would like to extend our thanks to the following sponsors for funding Laravel development. If you are interested in becoming a sponsor, please visit the Laravel Patreon page.

    Premium Partners

    Contributing

    Thank you for considering contributing to the Laravel framework! The contribution guide can be found in the Laravel documentation.

    Code of Conduct

    In order to ensure that the Laravel community is welcoming to all, please review and abide by the Code of Conduct.

    Security Vulnerabilities

    If you discover a security vulnerability within Laravel, please send an e-mail to Taylor Otwell via taylor@laravel.com. All security vulnerabilities will be promptly addressed.

    License

    The Laravel framework is open-sourced software licensed under the MIT license.

    Visit original content creator repository https://github.com/Hnooz/uptime-monitor-dashboard
  • Heart-Disease-Prediction

    Heart Disease Prediction


    Explore the Jupyter Notebook

    Try the Classification Website


    Objective

    WARNING: Please note that this project is intended as an illustrative example of the potential application of machine learning in assisting medical professionals with heart disease diagnosis. The information and results presented here (or on the accompanying website) do not constitute medical advice in any form.

    Heart disease is a prevalent health condition that requires accurate and timely diagnosis for effective treatment. This project aims to develop a machine learning model to assist doctors in diagnosing heart disease accurately and efficiently. By leveraging statistical learning algorithms and a comprehensive dataset, the model can analyze various patient factors and provide predictions regarding the probability of heart disease. The implementation of diagnostic machine learning models like this one offers several potential benefits, including improved diagnostic accuracy, reduced burden on medical professionals, and early detection of disease. Furthermore, the project promotes data-driven medicine and contributes to ongoing efforts in machine learning-based medical diagnosis. By providing an additional tool for risk assessment and decision-making, I hope that models like this one can enhance patient outcomes and advance our understanding of heart disease.

    Usage

Please note that the Heart Disease Prediction Website is intended for demonstration and educational purposes only and should not substitute professional medical advice or diagnosis.
1. A medical professional could collect and enter the necessary patient information (such as age, sex, chest pain type, resting blood pressure, etc.) into the website.
2. Once all the required data has been entered, click the “Predict” button to initiate the prediction process.
3. A medical professional could interpret the prediction result alongside other diagnostic information and medical expertise to make informed decisions and provide appropriate care for the patient.

    Dataset

    Link to dataset

    Attribute Information

    1. Age: age of the patient [years]
    2. Sex: sex of the patient [M: Male, F: Female]
    3. ChestPainType: chest pain type [TA: Typical Angina, ATA: Atypical Angina, NAP: Non-Anginal Pain, ASY: Asymptomatic]
    4. RestingBP: resting blood pressure [mm Hg]
5. Cholesterol: serum cholesterol [mg/dl]
    6. FastingBS: fasting blood sugar [1: if FastingBS > 120 mg/dl, 0: otherwise]
    7. RestingECG: resting electrocardiogram results [Normal: Normal, ST: having ST-T wave abnormality (T wave inversions and/or ST elevation or depression of > 0.05 mV), LVH: showing probable or definite left ventricular hypertrophy by Estes’ criteria]
    8. MaxHR: maximum heart rate achieved [Numeric value between 60 and 202]
    9. ExerciseAngina: exercise-induced angina [Y: Yes, N: No]
10. Oldpeak: ST depression induced by exercise relative to rest [numeric value]
    11. ST_Slope: the slope of the peak exercise ST segment [Up: upsloping, Flat: flat, Down: downsloping]
    12. HeartDisease: output class [1: heart disease, 0: Normal]

    Source

    This dataset was created by combining different datasets already available independently but not combined before. In this dataset, 5 heart datasets are combined over 11 common features which makes it the largest heart disease dataset available so far for research purposes. The five datasets used for its curation are:

    • Cleveland: 303 observations
    • Hungarian: 294 observations
    • Switzerland: 123 observations
    • Long Beach VA: 200 observations
• Statlog (Heart) Data Set: 270 observations

    Total: 1190 observations
    Duplicated: 272 observations

    Final dataset: 918 observations

    Creators:

    1. Hungarian Institute of Cardiology. Budapest: Andras Janosi, M.D.
    2. University Hospital, Zurich, Switzerland: William Steinbrunn, M.D.
    3. University Hospital, Basel, Switzerland: Matthias Pfisterer, M.D.
    4. V.A. Medical Center, Long Beach and Cleveland Clinic Foundation: Robert Detrano, M.D., Ph.D.

    Donor to UC Irvine Machine Learning Repository:
    David W. Aha (aha ‘@’ ics.uci.edu) (714) 856-8779

The datasets from the above sources were combined by Kaggle user fedesoriano into the dataset I used.

    Data source Citation:

    fedesoriano. (September 2021). Heart Failure Prediction Dataset. Retrieved [May 22, 2023] from https://www.kaggle.com/fedesoriano/heart-failure-prediction.

    Exploratory Data Analysis

    Questions I Asked About the Data:

How many positive and negative examples are there of the target variable?

[Figure: target variable class frequencies]

The dataset is close to balanced, so there is no need to implement techniques like Synthetic Minority Over-sampling (SMOTE) to improve the classification of infrequent categories.

How are continuous variables distributed (in particular, are they normally distributed)?

[Figures: continuous variable distributions; Q-Q plots]

      Key Takeaways:

1. Upon visual examination, the distributions of age, resting blood pressure, and maximum heart rate appeared to resemble a normal distribution. However, Q-Q plots indicated deviations from a Gaussian distribution. Consequently, I conducted Shapiro-Wilk tests on each of these variables, which confirmed their non-normal distribution.
2. Notably, a considerable number of cholesterol values were recorded as 0 to represent null values.

      Leveraging These Insights:

1. To address the departure from normality, I opted to employ the StandardScaler() function from the sklearn library, which standardizes each feature to zero mean and unit variance.
2. Initially, when constructing the baseline models, I retained the original cholesterol data without any modifications. However, to overcome the limitation imposed by the null cholesterol values, I employed a series of techniques which aim to replace the null values with numbers from which models can generate meaningful predictions.
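As an illustrative sketch of these two steps (synthetic data stands in for the real columns):

    import numpy as np
    from scipy.stats import shapiro
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    ages = rng.normal(54, 9, size=918).round()  # stand-in for the Age column

    # Shapiro-Wilk test: p < 0.05 indicates a non-normal distribution
    stat, p_value = shapiro(ages)
    print(f"W = {stat:.3f}, p = {p_value:.3f}")

    # StandardScaler standardizes each feature to zero mean, unit variance
    scaled = StandardScaler().fit_transform(ages.reshape(-1, 1))
    print(round(scaled.mean(), 6), round(scaled.std(), 6))  # ~0 and ~1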
How do continuous variables change in conjunction with the target variable?

[Figure: continuous variable distributions by target class]

A visual inspection indicates that age, maximum heart rate, and oldpeak differ most between the heart disease positive and negative groups. This was later statistically confirmed with an ANOVA.

How many examples are there of each categorical variable?

[Figure: categorical variable counts]

How does each categorical variable change in conjunction with the target variable?

[Figure: categorical variable distributions by target class]

    Feature Selection With Inferential Statistics

I used inferential statistics to determine the importance of the dataset’s features. If a feature had no significant impact on the target variable, it would be worth trying models that discard it. Removing an insignificant variable would reduce noise in the data, ideally lowering model overfitting and improving classification accuracy. For continuous features, I conducted an ANOVA, and for categorical features, I used a Chi-Squared Test.

    ANOVA

Analysis of Variance (ANOVA) is a method from inferential statistics that aims to determine whether there is a statistically significant difference between the means of two (or more) groups. This makes it a strong candidate for determining the importance of continuous features in predicting a categorical output. I used a One-Way ANOVA to test the importance of each continuous feature by checking whether the presence of heart disease had a statistically significant effect on the feature’s mean.
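As a minimal sketch of this test, assuming the dataset has been loaded from the Kaggle CSV (the filename is an assumption):

    import pandas as pd
    from scipy.stats import f_oneway

    df = pd.read_csv("heart.csv")  # assumed filename for the Kaggle dataset

    # Compare one continuous feature across the two target groups
    positive = df.loc[df["HeartDisease"] == 1, "MaxHR"]
    negative = df.loc[df["HeartDisease"] == 0, "MaxHR"]
    f_stat, p_value = f_oneway(positive, negative)
    print(f"F = {f_stat:.2f}, p = {p_value:.4g}")  # p < 0.05 -> keep feature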

    ANOVA Results

I found a statistically significant difference (p < 0.05) for each continuous feature. This led me to keep all continuous features as part of my classification models.

[Figure: ANOVA results]

    Chi-Squared Test

    The Chi-Squared test is a statistical hypothesis test that is used to determine whether there is a significant association between two categorical variables. It compares the observed frequencies in each category of a contingency table with the frequencies that would be expected if the variables were independent. In the context of feature selection for my machine learning models, I used the Chi-Squared test to identify the categorical features that are most significantly associated with the target variable.
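A minimal sketch of the same idea for a categorical feature (again assuming the Kaggle CSV):

    import pandas as pd
    from scipy.stats import chi2_contingency

    df = pd.read_csv("heart.csv")  # assumed filename for the Kaggle dataset

    # Contingency table of one categorical feature against the target
    table = pd.crosstab(df["ChestPainType"], df["HeartDisease"])
    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4g}")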

    Chi-Squared Test Results

As with the continuous features, I found a statistically significant association with heart disease (p < 0.05) for each categorical feature. This led me to keep all categorical features as part of my classification models.

[Figure: Chi-Squared test results]

    Data Imputation Techniques for Null Value Replacement

    As I discovered during Exploratory Data Analysis, the dataset has 172 samples with null values for Cholesterol (which were initially set to 0). I explored various data imputation techniques in an attempt to extract meaningful training data from such samples.

Initially, simple imputation strategies were deployed: mean, median, and mode imputation. A noteworthy improvement in model performance was observed compared to models trained on the original dataset, where null values were replaced by a default value of zero. Among these initial imputation techniques, mean imputation was found to deliver the best results for most machine learning models.
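A minimal sketch of simple imputation, assuming the Kaggle column names; the zero placeholders are first converted to NaN so the imputer can see them:

    import numpy as np
    import pandas as pd
    from sklearn.impute import SimpleImputer

    df = pd.read_csv("heart.csv")  # assumed filename

    # The null cholesterol values are stored as 0 in the raw data
    df["Cholesterol"] = df["Cholesterol"].replace(0, np.nan)

    imputer = SimpleImputer(strategy="mean")  # or "median" / "most_frequent"
    df[["Cholesterol"]] = imputer.fit_transform(df[["Cholesterol"]])

In practice the imputer would be fit on the training split only, so that no information leaks into the test set.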

Building upon these initial findings, I applied a more sophisticated imputation method: regression analysis to estimate the missing values. The regression techniques applied included Linear Regression, Ridge Regression, Lasso Regression, Random Forest Regression, Support Vector Regression, and Regression using Deep Learning. Each of these regression-based imputation techniques displayed a similar level of performance in terms of RMSE and MAE.
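The regression-based variant can be sketched roughly as follows: fit a regressor on rows where cholesterol is known, then predict it where it is missing (Ridge shown here; the other regressors slot in the same way, and the chosen predictor columns are a simplification):

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import Ridge

    df = pd.read_csv("heart.csv")  # assumed filename
    df["Cholesterol"] = df["Cholesterol"].replace(0, np.nan)

    features = ["Age", "RestingBP", "MaxHR", "Oldpeak", "FastingBS"]
    known = df["Cholesterol"].notna()

    # Train on rows with a known cholesterol value, predict the missing ones
    model = Ridge().fit(df.loc[known, features], df.loc[known, "Cholesterol"])
    df.loc[~known, "Cholesterol"] = model.predict(df.loc[~known, features])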

The performance of the regression models was not as satisfactory as initially hypothesized, often falling short of the results obtained with mean imputation. Despite this, for Random Forest Classification models, the regression-based methods exhibited strong results in terms of precision and specificity metrics. In particular, Linear Regression and Ridge Regression-based imputation strategies performed well in these areas.

    Classification Models

    A key component of this project involved the implementation and performance evaluation of a variety of classification models. The chosen models were tested on the datasets prepared as described in the “Data Imputation Techniques for Null Value Replacement” section. All the datasets utilized in this phase had continuous features standardized and normalized to ensure a uniform scale and improved model performance.

    1. Logistic Regression
    2. Random Forest Classifier
    3. Support Vector Machine Classifier
    4. Gaussian Naive Bayes Classifier
    5. Bernoulli Naive Bayes Classifier
    6. XGBoost Classifier
    7. Neural Network (of various architectures)

For the neural network models, an expansive exploration of hyperparameter variations was conducted using cross-validation. The architectures ranged from a single hidden layer to three hidden layers, with the number of neurons per layer varying from 32 to 128.
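A rough sketch of how such a search could look with scikit-learn (the project’s actual networks and search grid may differ; synthetic data stands in for the prepared features):

    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 11))           # stand-in feature matrix
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in target

    param_grid = {
        # One to three hidden layers, 32 to 128 neurons each
        "hidden_layer_sizes": [(32,), (64,), (128,), (64, 32), (128, 64, 32)],
    }
    search = GridSearchCV(
        MLPClassifier(max_iter=1000, random_state=0),
        param_grid,
        cv=5,
        scoring="accuracy",
    )
    search.fit(X, y)
    print(search.best_params_, round(search.best_score_, 3))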

Each of these models was trained on 80% of the data and tested on the remaining 20%. Accuracy, Precision, Recall, F1-Score, and Specificity metrics were tracked.
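A condensed sketch of this evaluation loop for one model (synthetic data again stands in for the prepared dataset; note that specificity is derived from the confusion matrix, as sklearn has no direct scorer for it):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                                 precision_score, recall_score)
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(918, 11))                        # stand-in features
    y = (X[:, 0] + rng.normal(size=918) > 0).astype(int)  # stand-in target

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )
    model = RandomForestClassifier(random_state=42).fit(X_train, y_train)
    pred = model.predict(X_test)

    tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
    print("accuracy:   ", accuracy_score(y_test, pred))
    print("precision:  ", precision_score(y_test, pred))
    print("recall:     ", recall_score(y_test, pred))
    print("f1:         ", f1_score(y_test, pred))
    print("specificity:", tn / (tn + fp))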

    Results and Model Performance Evaluation

    A thorough analysis of over 80 models was conducted in this project, with the evaluation criteria based on several metrics including accuracy, precision, recall (sensitivity), F1-score, and specificity. Out of all the models evaluated, two demonstrated superior performance in their respective contexts.

1. Deep Learning Model: The top performing model by Accuracy, Recall, and F1-Score was a deep learning model trained on data where missing cholesterol values were imputed using the mean of the available values. Performance metrics can be found in the figures below and in Tables 1 and 2. To accompany the single-number metrics, probability density functions (PDFs) were also constructed to quantify the uncertainty in this model’s sensitivity/recall and specificity (a sketch of one way such intervals can be computed appears after the tables below). I wrote a comprehensive report detailing the generation of sensitivity and specificity PDFs and credible intervals, which can be found in the repository or by clicking this link.

[Figures: Deep Learning classifier confusion matrix; sensitivity and specificity PDFs]

Table 1: Deep Learning Model Performance

| Accuracy | Precision | Recall | F1-Score | Specificity |
| -------- | --------- | ------ | -------- | ----------- |
| 91.30%   | 91.96%    | 93.64% | 92.79%   | 87.84%      |

Table 2: Deep Learning Model Sensitivity and Specificity CI

|                    | 95% CI Minimum | 95% CI Maximum |
| ------------------ | -------------- | -------------- |
| Sensitivity/Recall | 88.5%          | 96.5%          |
| Specificity        | 80.5%          | 93.5%          |

2. Random Forest Classifier: The top performing model by Precision and Specificity was a Random Forest Classifier trained on data with missing cholesterol values imputed using Ridge Regression. Performance metrics can be found in the figures below and in Tables 3 and 4.

[Figures: Random Forest classifier confusion matrix; sensitivity and specificity PDFs]

Table 3: Random Forest Classifier Performance

| Accuracy | Precision | Recall | F1-Score | Specificity |
| -------- | --------- | ------ | -------- | ----------- |
| 89.67%   | 94.34%    | 88.50% | 91.32%   | 91.55%      |

Table 4: Random Forest Classifier Sensitivity and Specificity CI

|                    | 95% CI Minimum | 95% CI Maximum |
| ------------------ | -------------- | -------------- |
| Sensitivity/Recall | 82.5%          | 92.5%          |
| Specificity        | 84.5%          | 96.5%          |
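As referenced for the deep learning model above, here is a sketch of one standard way such PDFs and credible intervals can be constructed; whether this matches the linked report’s exact method is not guaranteed, and the counts below are illustrative:

    from scipy.stats import beta

    # Illustrative confusion-matrix counts for a held-out test set
    tp, fn = 103, 7   # true positives / false negatives
    tn, fp = 65, 9    # true negatives / false positives

    # With a uniform Beta(1, 1) prior, the posterior over sensitivity is
    # Beta(tp + 1, fn + 1); specificity is handled analogously.
    sensitivity = beta(tp + 1, fn + 1)
    specificity = beta(tn + 1, fp + 1)

    print("sensitivity 95% CI:", sensitivity.ppf([0.025, 0.975]).round(3))
    print("specificity 95% CI:", specificity.ppf([0.025, 0.975]).round(3))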

    Visit original content creator repository https://github.com/nripstein/Heart-Disease-Prediction
  • deno_ieee754

    deno_ieee754


    Parse IEEE754 floating point numbers for Deno.

    Usage

import * as ieee754 from "https://deno.land/x/ieee754/mod.ts";
// or
// import read from "https://deno.land/x/ieee754/read.ts";
// import write from "https://deno.land/x/ieee754/write.ts";

// 42.42 encoded as a big-endian 32-bit float (23 mantissa bits, 4 bytes)
const buf: Uint8Array = Uint8Array.of(0x42, 0x29, 0xae, 0x14);
const EPSILON = 0.00001;
const f32 = ieee754.read(buf, 0, false, 23, 4);
console.log(Math.abs(f32 - 42.42) < EPSILON); // true

// 42.42 encoded as a big-endian 64-bit float (52 mantissa bits, 8 bytes)
const buf64 = Uint8Array.of(0x40, 0x45, 0x35, 0xc2, 0x8f, 0x5c, 0x28, 0xf6);
const f64 = ieee754.read(buf64, 0, false, 52, 8);
console.log(f64); // 42.42

    Use with number array:

    const arr: number[] = [1, 2, 3]; // a number array
    const buf = Uint8Array.from(arr); // convert to `Uint8Array`
    const num = ieee754.read(buf, 0, false, 23, 4);

    APIs

    read:

    /**
 * Read IEEE754 floating point numbers from an array.
     * @param buffer the buffer
     * @param offset offset into the buffer
     * @param isLE is little endian?
     * @param mLen mantissa length
     * @param nBytes number of bytes
     */
    function read(
      buffer: Uint8Array,
      offset: number,
      isLE: boolean,
      mLen: number,
      nBytes: number,
    ): number;

    write:

    /**
 * Write IEEE754 floating point numbers to an array.
     * @param buffer the buffer
     * @param value value to set
     * @param offset offset into the buffer
     * @param isLE is little endian?
     * @param mLen mantissa length
     * @param nBytes number of bytes
     */
    function write(
      buffer: Uint8Array,
      value: number,
      offset: number,
      isLE: boolean,
      mLen: number,
      nBytes: number,
    ): void;

    License

    deno_ieee754 is released under the MIT License. See the bundled LICENSE file for details.

    Visit original content creator repository https://github.com/justjavac/deno_ieee754
  • mai-joel_maximilian-bachelor_kolloquium

    Visit original content creator repository
    https://github.com/mai-space/mai-joel_maximilian-bachelor_kolloquium

  • rl_lib

    rl_lib

Motivation: I have always thought that the only way to truly test whether you understand a concept is to see if you can build it. As such, all of these algorithms were implemented by studying the relevant papers and coded to test my understanding.

“What I cannot create, I do not understand” – Richard Feynman

    Algorithms

    DQN

    Policy Gradient

    Tabular Solutions

These were mainly referenced from a really good lecture series by Colin Skow on YouTube [link]. A large part was also drawn from the Deep Reinforcement Learning Udacity course. A minimal tabular Q-learning sketch follows the list below.

    • Bellman Equation
    • Dynamic Programming
    • Q learning
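As a minimal sketch of the tabular approach (a toy chain environment, not one from this repository):

    import numpy as np

    # Tabular Q-learning on a 5-state chain: actions 0/1 move left/right,
    # reward 1 only on reaching the rightmost (terminal) state.
    n_states, n_actions = 5, 2
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, epsilon = 0.1, 0.99, 0.1
    rng = np.random.default_rng(0)

    for _ in range(2000):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection
            if rng.random() < epsilon:
                a = int(rng.integers(n_actions))
            else:
                a = int(Q[s].argmax())
            s_next = max(s - 1, 0) if a == 0 else min(s + 1, n_states - 1)
            r = 1.0 if s_next == n_states - 1 else 0.0
            # Bellman update toward the bootstrapped target
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next

    print(Q.argmax(axis=1))  # states 0-3 learn action 1 (move right)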

    Associated Articles

    Results

    DQN Pong

• Converged to an average score of 17.56 after 1300 episodes.
    • Code and results can be found under DQN/7. Vanilla DQN Atari.ipynb


    DDPG Continuous

• Converged to ~ -270 after about 100 episodes
• Code and results can be found under Policy Gradient/4. DDPG.ipynb


    PPO discrete

    • Solved in 409 episodes
    • Code and results can be found under Policy Gradient/5. PPO.ipynb


    PPO Atari – with Baseline Enhancements

    • Code and results can be found under PPO/


    Todo

• Curiosity Driven Exploration
    • HER (Hindsight Experience Replay)
    • Recurrent networks in PPO and DDPG

    Credits

Whilst I tried to code everything directly from the papers, it wasn’t always easy to understand what I was doing wrong when the algorithm just wouldn’t train or I got runtime errors. As such, I used the following repositories as references.

    Visit original content creator repository https://github.com/DarylRodrigo/rl_lib
  • Image-filters

    Image-filters 🏙️🏙️

Script to apply some filters to BMP images. Written entirely in C++.

    Run it🚀

Just clone the project using git clone into the directory where you want to use it. Once the project is cloned, the most important files are the makefile and the code itself. In addition, some sample images are provided in an additional folder.

To run it you must have an input directory (the one we provide as an example is valid) and an output directory (it can be the same as the input directory). Once this is set up, simply type make in the terminal from the root directory of the project. If there are no errors, continue by entering the following command:

    ./img-par filter input_directory output_directory

    If no errors appear, you should see the images from the input directory in the output directory, but with the specified filter applied.

Warning: This project is developed and intended to be compiled in a Linux environment. You must have the necessary environment to compile C++ files (using the g++ compiler). Other libraries may need to be installed, especially if you want to work on Windows.

    Posible filters📷

At the moment, only one of the following filters can be applied per run:

    • Copy: The equivalent of not using any filter. The files will simply be copied from the input directory to the output directory. For example, when copying the following image of the tiger, we will obtain:

    Tiger image

    • Gauss: In image processing, a Gaussian blur (also known as Gaussian smoothing) is the result of blurring an image by a Gaussian function. See wiki

    Tiger image

    • Sobel: The Sobel operator, sometimes called the Sobel–Feldman operator or Sobel filter, is used in image processing and computer vision, particularly within edge detection algorithms where it creates an image emphasising edges. See wiki

    Tiger image

To use one of these filters, just replace the word “filter” in the command from the previous section with the name of the corresponding filter.
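For intuition about what these kernels do, here is an illustrative Python sketch (the project itself is C++) of a naive 3x3 filter pass:

    import numpy as np

    def filter3x3(img, kernel):
        """Naive 3x3 filter over a grayscale image, leaving the border at zero."""
        out = np.zeros_like(img, dtype=float)
        for y in range(1, img.shape[0] - 1):
            for x in range(1, img.shape[1] - 1):
                region = img[y - 1:y + 2, x - 1:x + 2]
                out[y, x] = float((region * kernel).sum())
        return out

    # Normalized 3x3 Gaussian kernel and the two Sobel kernels
    gauss = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
    sobel_y = sobel_x.T

    img = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
    blurred = filter3x3(img, gauss)            # Gaussian smoothing
    edges = np.hypot(filter3x3(img, sobel_x),  # gradient magnitude
                     filter3x3(img, sobel_y))  # emphasizes edges
    print(blurred.shape, edges.max())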

    Want to collaborate?🙋🏻

Feel free to try adding new filters, or improving and optimizing the existing code. The Sobel filter performs poorly on certain edges of some images.

    Visit original content creator repository https://github.com/enrique-lozano/Image-filters
  • listenbrainz-ts

    listenbrainz

    TypeScript library to access the ListenBrainz API.

    You can use it to submit your listening history to your ListenBrainz account with your own apps.

    For end users, there is also the elbisaur CLI, which exposes many features of the library to the command line.
    In addition to showcasing the usage of the API client, it also has advanced features to manage listens.

    Features

    • Submits listens and playing now notifications
    • Browses your listening history and allows you to delete listens
    • Handles ListenBrainz authorization and API errors
    • Adheres to rate limits and optionally retries failed requests
    • Provides generic GET and POST methods for (yet) unsupported API endpoints
    • Includes additional parsers to extract listens from text formats
    • Ships with type definitions and inline documentation

    As this library only makes use of web standards, it is also compatible with modern browsers (after transpilation to JavaScript).

    Usage

    In order to submit listens, you have to specify a user token which you can obtain from your ListenBrainz settings page.

    The following example instantiates a ListenBrainz client with a token from an environment variable and submits a playing now notification for a track:

    import { ListenBrainzClient } from "@kellnerd/listenbrainz";
    
    const client = new ListenBrainzClient({ userToken: Deno.env.get("LB_TOKEN") });
    await client.playingNow({ artist_name: "John Doe", track_name: "Love Song" });

    Parsers

    All listen parsers accept text input and generate Listen objects as their output.
    These objects can then be used together with the ListenBrainz API and the LB client.
    None of the parsers performs any filtering of listens, so you have to detect potential duplicates and skipped listens yourself.

    The parsers do not include any logic to access files to make them platform independent.
    You can pass them the content from a HTTP response body or from a file, for example.

    The following parsers are available in the listenbrainz/parser/* submodules:

    • JSON: Accepts a JSON-serialized Listen object (as shown by the LB “Inspect listen” dialog) or an array of Listen objects as input.
    • JSONL: Multiple JSON-serialized Listen objects, separated by line breaks (format of listens files from a LB listening history export).
    • MusicBrainz: Accepts a release from the MusicBrainz JSON API and creates listens for the selected tracks.
    • .scrobbler.log: TSV table document which is generated by some portable music players for later submission to Last.fm.
      • Parser currently accepts files with AUDIOSCROBBLER/1.1 header which are generated by players with Rockbox firmware, possibly also by others.
      • Automatically converts timestamps from your local timezone to UTC (as Rockbox players are usually not timezone-aware).
      • Skipped listens are marked with track_metadata.additional_info.skipped = true.
    • Spotify: JSON files from an Extended Streaming History download.
      • Calculates the correct listen (start) timestamp from stream end time and duration.
      • Makes use of the “offline” timestamp to detect extreme outliers where Spotify has only logged a stream as ended when the app was next opened.
      • Skipped listens can be detected by their track_metadata.additional_info attributes skipped, reason_end and a too short duration_ms.

    Visit original content creator repository
    https://github.com/kellnerd/listenbrainz-ts

  • SNU_2D_ProgrammingTools_IDE_Makefile


{Project icon}

    By:

{Developer name}

    Top

    README.md


    Read this article in a different language

ar العربية | zh-t 中文(繁體) | en-US English | fr Français | de Deutsch | eo Esperanto | ja 日本語 | ko 한국어 | pl Polski | ru Русский | es Español

Translations into languages other than English are machine translated and are not yet accurate. No errors have been fixed as of March 21st, 2021. Please report translation errors here. Make sure to back up your corrections with sources and guide me, as I don’t know languages other than English well (I plan on getting a translator eventually). Please cite Wiktionary and other sources in your report; failing to do so will result in the correction being rejected.


    Index

    00.0 – Top

    00.1 – Title

    00.2 – Read this article in a different language

    00.3 – Index

    01.0 – Description

    02.0 – About

    03.0 – Wiki

    04.0 – Version history

    05.0 – Software status

    06.0 – Sponsor info

07.0 – Contributors

    08.0 – Issues

    08.1 – Current issues

    08.2 – Past issues

    08.3 – Past pull requests

    08.4 – Active pull requests

    09.0 – Resources

    10.0 – Contributing

    11.0 – About README

    12.0 – README Version history

    13.0 – Footer

    13.1 – End of file


    <repo_description>


    About

    See above.


    Wiki

    Click/tap here to view this projects Wiki

    If the project has been forked, the Wiki was likely removed. Luckily, I include an embedded version. You can view it here.


    Sponsor info


    You can sponsor this project if you like, but please specify what you want to donate to. See the funds you can donate to here

    You can view other sponsor info here

    Try it out! The sponsor button is right up next to the watch/unwatch button.


    Version history

    Version history currently unavailable

    No other versions listed


    Software status

All of my works are free of restrictions. DRM (Digital Restrictions Management) is not present in any of my works.

[Image: DRM-free label]

    This sticker is supported by the Free Software Foundation. I never intend to include DRM in my works.

I am using the expansion “Digital Restrictions Management” instead of the better-known “Digital Rights Management”, as the common name is misleading: there are no rights with DRM. The spelling “Digital Restrictions Management” is more accurate, and is supported by Richard M. Stallman (RMS) and the Free Software Foundation (FSF)

    This section is used to raise awareness for the problems with DRM, and also to protest it. DRM is defective by design and is a major threat to all computer users and software freedom.

    Image credit: defectivebydesign.org/drm-free/…


Contributors

Currently, I am the only contributor. Contributing is allowed, as long as you follow the rules of the CONTRIBUTING.md file.

1. seanpm2001 – x commits (As of DoW, Month, DoM, Yr at ##:## a/pm)
2. No other contributors.

    Issues

    Current issues

    • None at the moment

    • No other current issues

If the repository has been forked, issues likely have been removed. Luckily I keep an archive of certain issues here

    Read the privacy policy on issue archival here

    TL;DR

    I archive my own issues. Your issue won’t be archived unless you request it to be archived.

    Past issues

    • None at the moment

    • No other past issues

If the repository has been forked, issues likely have been removed. Luckily I keep an archive of certain issues here

    Read the privacy policy on issue archival here

    TL;DR

    I archive my own issues. Your issue won’t be archived unless you request it to be archived.

    Past pull requests

    • None at the moment

    • No other past pull requests

If the repository has been forked, issues likely have been removed. Luckily I keep an archive of certain issues here

    Read the privacy policy on issue archival here

    TL;DR

    I archive my own issues. Your issue won’t be archived unless you request it to be archived.

    Active pull requests

    • None at the moment

    • No other active pull requests

If the repository has been forked, issues likely have been removed. Luckily I keep an archive of certain issues here

    Read the privacy policy on issue archival here

    TL;DR

    I archive my own issues. Your issue won’t be archived unless you request it to be archived.


    Resources

    Here are some other resources for this project:

    Project language file

    Join the discussion on GitHub

    No other resources at the moment.


    Contributing

    Contributing is allowed for this project, as long as you follow the rules of the CONTRIBUTING.md file.

    Click/tap here to view the contributing rules for this project


    About README

    File type: Markdown (*.md)

    File version: 0.1 (Sunday, March 21st 2021 at 7:50 pm)

    Line count: 0,296


    README version history

    Version 0.1 (Sunday, March 21st 2021 at 7:50 pm)

    Changes:

    • Started the file
    • Added the title section
    • Added the index
    • Added the about section
    • Added the Wiki section
    • Added the version history section
    • Added the issues section.
    • Added the past issues section
    • Added the past pull requests section
    • Added the active pull requests section
    • Added the contributors section
    • Added the contributing section
    • Added the about README section
    • Added the README version history section
    • Added the resources section
    • Added a software status section, with a DRM free sticker and message
    • Added the sponsor info section
    • No other changes in version 0.1

    Version 1 (Coming soon)

    Changes:

    • Coming soon
    • No other changes in version 1

    Version 2 (Coming soon)

    Changes:

    • Coming soon
    • No other changes in version 2

    You have reached the end of the README file

    Back to top Exit

    EOF


    Visit original content creator repository https://github.com/seanpm2001/SNU_2D_ProgrammingTools_IDE_Makefile