Blog

  • SNU_2D_ProgrammingTools_IDE_Makefile


    {Project icon}

    By:

    {Developer name}

    Top

    README.md


    Read this article in a different language

    ar عربى | zh-t 中國傳統的 | en-us | fr français | de Deutsch | eo Esperanto | ja 日本語 | ko-south 韓國語 | pl polski | ru русский | es en español

    Translations into languages other than English are machine translated and are not yet accurate. As of March 21st, 2021, no translation errors have been fixed. Please report translation errors here. Make sure to back up your correction with sources and guide me, as I don’t know languages other than English well (I plan on getting a translator eventually). Please cite Wiktionary and other sources in your report; failing to do so will result in the correction being rejected.


    Index

    00.0 – Top

    00.1 – Title

    00.2 – Read this article in a different language

    00.3 – Index

    01.0 – Description

    02.0 – About

    03.0 – Wiki

    04.0 – Version history

    05.0 – Software status

    06.0 – Sponsor info

    07.0 – Contributors

    08.0 – Issues

    08.1 – Current issues

    08.2 – Past issues

    08.3 – Past pull requests

    08.4 – Active pull requests

    09.0 – Resources

    10.0 – Contributing

    11.0 – About README

    12.0 – README Version history

    13.0 – Footer

    13.1 – End of file


    <repo_description>


    About

    See above.


    Wiki

    Click/tap here to view this project’s Wiki

    If the project has been forked, the Wiki was likely removed. Luckily, I include an embedded version. You can view it here.


    Sponsor info

    SponsorButton.png

    You can sponsor this project if you like, but please specify what you want to donate to. See the funds you can donate to here

    You can view other sponsor info here

    Try it out! The sponsor button is right up next to the watch/unwatch button.


    Version history

    Version history currently unavailable

    No other versions listed


    Software status

    All of my works are free, with some restrictions. DRM (Digital Restrictions Management) is not present in any of my works.

    DRM-free_label.en.svg

    This sticker is supported by the Free Software Foundation. I never intend to include DRM in my works.

    I am using the abbreviation “Digital Restrictions Management” instead of the better-known “Digital Rights Management”, as the common name is misleading: there are no rights with DRM. The expansion “Digital Restrictions Management” is more accurate and is supported by Richard M. Stallman (RMS) and the Free Software Foundation (FSF).

    This section is used to raise awareness for the problems with DRM, and also to protest it. DRM is defective by design and is a major threat to all computer users and software freedom.

    Image credit: defectivebydesign.org/drm-free/…


    Contributors

    Currently, I am the only contributor. Contributing is allowed, as long as you follow the rules of the CONTRIBUTING.md file.

      1. seanpm2001 – x commits (As of DoW, Month, DoM, Yr at ##:## a/pm)
      2. No other contributors.

    Issues

    Current issues

    • None at the moment

    • No other current issues

    If the repository has been forked, issues have likely been removed. Luckily, I keep an archive of certain issues here.

    Read the privacy policy on issue archival here

    TL;DR

    I archive my own issues. Your issue won’t be archived unless you request it to be archived.

    Past issues

    • None at the moment

    • No other past issues

    If the repository has been forked, issues have likely been removed. Luckily, I keep an archive of certain issues here.

    Read the privacy policy on issue archival here

    TL;DR

    I archive my own issues. Your issue won’t be archived unless you request it to be archived.

    Past pull requests

    • None at the moment

    • No other past pull requests

    If the repository has been forked, pull requests have likely been removed. Luckily, I keep an archive of certain pull requests here.

    Read the privacy policy on issue archival here

    TL;DR

    I archive my own issues. Your issue won’t be archived unless you request it to be archived.

    Active pull requests

    • None at the moment

    • No other active pull requests

    If the repository has been forked, pull requests have likely been removed. Luckily, I keep an archive of certain pull requests here.

    Read the privacy policy on issue archival here

    TL;DR

    I archive my own issues. Your issue won’t be archived unless you request it to be archived.


    Resources

    Here are some other resources for this project:

    Project language file

    Join the discussion on GitHub

    No other resources at the moment.


    Contributing

    Contributing is allowed for this project, as long as you follow the rules of the CONTRIBUTING.md file.

    Click/tap here to view the contributing rules for this project


    About README

    File type: Markdown (*.md)

    File version: 0.1 (Sunday, March 21st 2021 at 7:50 pm)

    Line count: 0,296


    README version history

    Version 0.1 (Sunday, March 21st 2021 at 7:50 pm)

    Changes:

    • Started the file
    • Added the title section
    • Added the index
    • Added the about section
    • Added the Wiki section
    • Added the version history section
    • Added the issues section.
    • Added the past issues section
    • Added the past pull requests section
    • Added the active pull requests section
    • Added the contributors section
    • Added the contributing section
    • Added the about README section
    • Added the README version history section
    • Added the resources section
    • Added a software status section, with a DRM free sticker and message
    • Added the sponsor info section
    • No other changes in version 0.1

    Version 1 (Coming soon)

    Changes:

    • Coming soon
    • No other changes in version 1

    Version 2 (Coming soon)

    Changes:

    • Coming soon
    • No other changes in version 2

    You have reached the end of the README file

    Back to top Exit

    EOF


    Visit original content creator repository
    https://github.com/seanpm2001/SNU_2D_ProgrammingTools_IDE_Makefile

  • Reflex

    Reflex

    Archived and no longer maintained! Use the far more powerful game engine DuckEngine instead.

    A simple JavaScript game engine.

    MIT LICENSE

    Features

    1. Rigid Bodies.
    2. Basic rigid body physics.
    3. RigidBody custom event listener.
    4. StaticLights.
    5. Background loader with methods.
    6. Attaching different shapes to one main object.
    7. Sound player with event listeners and custom events.
    8. Proximity sounds.
    9. Different shapes such as rect, roundrect, circle, and sprite (img).
    10. Entity Management.
    11. Particles with images or colors and preset animations such as explosion and smoke.
    12. Text and Button UI.
    13. Basic Shadow.
    14. Great performance.
    15. Future Plans such as dynamic lighting.
    16. Often updates/patches.
    17. And way more. Look at the docs!

    Download

    GitHub

    1. Download the latest release from GitHub.
    2. Unzip and copy it into your project.
    3. Import Reflex as a module. (Help here)
    4. Done! Read the docs.

    NPM

    1. Run npm install @ksplat/reflex.
    2. Import Reflex as a module. (Help here)
    3. Done! Read the docs.

    CDN Module

    Development

    1. Import Reflex as a module. (Help here)
    2. Change the from path to the CDN link.
    3. Done! Read the docs.

    Production

    1. Import Reflex as a module. (Help here)
    2. Change the from path to the CDN link.
    3. Done! Read the docs.

    Help

    Importing

    1. Make sure your script has the attribute type="module".
    2. Import all from Reflex.
    3. Look at the example.

    Loop not starting

    1. Make sure you started Reflex.
    2. Make sure you passed in a valid function.
    3. If none of this helped, create a bug issue.

    Branches

    • main

      Master/production branch; merge into it, do not commit directly

    • features

      Features to be merged to main on completion

    Visit original content creator repository
    https://github.com/art-emini/Reflex

  • Maintenance-App

    Maintenance App

    Welcome to the Maintenance App, a platform that enables faculty and staff at the University of Jaffna to submit maintenance requests for campus buildings and facilities. With this app, users can quickly and easily create a complaint, which will be assigned to a work engineer for review and resolution. The app allows for efficient tracking of maintenance requests, ensures timely follow-up, and streamlines communication between the university and its community.

    Technologies Used

    This app is built using the MERN stack, which includes:

    • MongoDB: a NoSQL database for storing and managing data
    • Express.js: a Node.js framework for building web applications
    • React Native: a JavaScript framework for building native mobile user interfaces
    • Node.js: a JavaScript runtime environment for executing server-side code

    Features

    • User Authentication: Secure login system for users, work engineers, and supervisors
    • Complaint Submission: Users can create a complaint with details of the issue, and add images if necessary
    • Complaint Assignment: Work engineers can view all complaints and assign them to supervisors for review
    • Complaint Tracking: Supervisors can track the progress of assigned complaints, update their status, and add comments
    • Notifications: Automated email notifications for complaint submission, assignment, and resolution
    • Admin Panel: For managing users, work engineers, supervisors, and complaint categories

    Installation and Setup

    To get started with the Maintenance App, follow these steps:

    1. Clone the repository to your local machine
    2. Install dependencies using npm install
    3. Set up the environment variables
    4. Start the server using npm start

    For more detailed instructions, please refer to the installation guide.

    Contributors

    This app was developed by the Codewave team, which includes:

    Visit original content creator repository
    https://github.com/nadunchanna98/Maintenance-App

  • geese

    Integrative Methods of Analysis for Genetic Epidemiology

    geese: GEne-functional Evolution using SufficiEncy

    R-CMD-check

    This R package taps into statistical theory primarily developed in social networks. Using Exponential-Family Random Graph Models (ERGMs), geese provides a statistical framework for building Gene Functional Evolution Models using Sufficiency. For example, users can directly hypothesize whether Neofunctionalization or Subfunctionalization events were taking place in a phylogeny, without having to estimate the full transition Markov Matrix that is usually used.

    GEESE is computationally efficient, with C++ under the hood, allowing the analysis of either a single tree (a GEESE) or multiple trees simultaneously (a pooled model) in a Flock.

    This is a work in progress and based on the theoretical work developed during George G. Vega Yon’s doctoral thesis.

    Installation

    You can install the development version from GitHub with:

    # install.packages("devtools")
    devtools::install_github("USCbiostats/geese")

    Examples

    Simulating annotations (two different sets)

    library(geese)
    
    # Preparing data
    n <- 100L
    annotations <- replicate(n * 2 - 1, c(9, 9), simplify = FALSE)
    
    # Random tree
    set.seed(31)
    tree <- aphylo::sim_tree(n)$edge - 1L
    
    # Sorting by the second column
    tree <- tree[order(tree[, 2]), ]
    
    duplication <- sample.int(
      n = 2, size = n * 2 - 1, replace = TRUE, prob = c(.4, .6)
      ) == 1
    
    # Reading the data in
    amodel <- new_geese(
      annotations = annotations,
      geneid = c(tree[, 2], n),
      parent = c(tree[, 1], -1),
      duplication = duplication
      )
    
    # Preparing the model
    term_gains(amodel, 0:1, duplication = 1)
    term_loss(amodel, 0:1, duplication = 1)
    term_gains(amodel, 0:1, duplication = 0)
    term_loss(amodel, 0:1, duplication = 0)
    term_maxfuns(amodel, 0, 1, duplication = 2)
    init_model(amodel)
    #> Initializing nodes in Geese (this could take a while)...
    #> ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| done.
    
    # Testing
    params <- c(
      # Gains spe
      2, 1.5,
      # Loss
      -2, -1.5,
      # Gains spe
      -2, -1,
      # Loss spe
      -4, -4,
      # Max funs
      2, 
      # Root probabilities
      -10, -10
    )
    names(params) <- c(
      "gain0 dupl", "gain1 dupl",
      "loss0 dupl", "loss1 dupl",
      "gain0 spe", "gain1 spe",
      "loss0 spe", "loss1 spe",
      "onefun", 
      "root0", "root1"
      )
    
    likelihood(amodel, params*1) # Equals 1 b/c all missings
    #> [1] 1
    
    # Simulating data
    fake1 <- sim_geese(p = amodel, par = params, seed = 212)
    fake2 <- sim_geese(p = amodel, par = params)
    
    # Removing interior node data
    is_interior <- which(tree[,2] %in% tree[,1])
    is_leaf     <- which(!tree[,2] %in% tree[,1])
    # for (i in is_interior) {
    #   fake1[[i]] <- rep(9, 2)
    #   fake2[[i]] <- rep(9, 2)
    # }

    We can now visualize either of the annotations using the aphylo package.

    library(aphylo)
    #> Loading required package: ape
    ap <- aphylo_from_data_frame(
      tree        = as.phylo(tree), 
      annotations = data.frame(
        id = c(tree[, 2], n),
        do.call(rbind, fake1)
        )
    )
    plot(ap)

    Model fitting MLE

    # Creating the object
    amodel <- new_geese(
      annotations = fake1,
      geneid      = c(tree[, 2], n),
      parent      = c(tree[, 1],-1),
      duplication = duplication
      )
    
    # Adding the model terms
    term_gains(amodel, 0:1, duplication = 1)
    term_loss(amodel, 0:1, duplication = 1)
    term_gains(amodel, 0:1, duplication = 0)
    term_loss(amodel, 0:1, duplication = 0)
    term_maxfuns(amodel, 0, 1, duplication = 2)
    init_model(amodel)
    #> Initializing nodes in Geese (this could take a while)...
    #> ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| done.
    
    print(amodel)
    #> GEESE
    #> INFO ABOUT PHYLOGENY
    #> # of functions           : 2
    #> # of nodes [int; leaf]   : [99; 100]
    #> # of ann. [zeros; ones]  : [83; 117]
    #> # of events [dupl; spec] : [43; 56]
    #> Largest polytomy         : 2
    #> 
    #> INFO ABOUT THE SUPPORT
    #> Num. of Arrays       : 396
    #> Support size         : 8
    #> Support size range   : [1, 1]
    #> Transform. Fun.      : no
    #> Model terms (9)    :
    #>  - Gains 0 at duplication
    #>  - Gains 1 at duplication
    #>  - Loss 0 at duplication
    #>  - Loss 1 at duplication
    #>  - Gains 0 at speciation
    #>  - Gains 1 at speciation
    #>  - Loss 0 at speciation
    #>  - Loss 1 at speciation
    #>  - Genes with [0, 1] funs
    
    # Finding MLE
    ans_mle <- geese_mle(amodel, hessian = TRUE, ncores = 4)
    ans_mle
    #> $par
    #>  [1]  2.327179  1.553591 -1.729575 -1.833682 -1.590516 -1.119200 -3.823851
    #>  [8] -2.864298  1.982499 -1.465843  4.366549
    #> 
    #> $value
    #> [1] -109.7751
    #> 
    #> $counts
    #> function gradient 
    #>     1002       NA 
    #> 
    #> $convergence
    #> [1] 1
    #> 
    #> $message
    #> NULL
    #> 
    #> $hessian
    #>               [,1]          [,2]          [,3]         [,4]          [,5]
    #>  [1,] -4.206819071  0.5959524394  0.8862856191 -1.721987653 -1.503185e-01
    #>  [2,]  0.595952439 -5.1501119636 -2.3668333888  2.589829846  2.739261e-02
    #>  [3,]  0.886285619 -2.3668333888 -6.9892574608  1.273369396  9.894126e-03
    #>  [4,] -1.721987653  2.5898298457  1.2733693957 -5.950797128 -3.604817e-02
    #>  [5,] -0.150318497  0.0273926144  0.0098941264 -0.036048174 -1.372080e+00
    #>  [6,]  0.020065546 -0.0867748664 -0.0605347044  0.373968106  4.557307e-02
    #>  [7,]  0.238633328 -0.0203662864 -0.2858568173  0.088855117 -5.867635e-02
    #>  [8,] -0.169421696  0.5298915990  0.1330584389 -0.704884567  2.255319e-01
    #>  [9,]  2.314439286  4.1601766227 -3.0270645492 -5.257577271  6.883251e-01
    #> [10,] -0.020862576 -0.0004507292 -0.0234848407  0.008509284  1.834480e-02
    #> [11,]  0.000175195 -0.0036292338 -0.0001882725 -0.003219606  2.817835e-05
    #>                [,6]         [,7]          [,8]          [,9]         [,10]
    #>  [1,]  0.0200655457  0.238633328 -0.1694216962  2.314439e+00 -2.086258e-02
    #>  [2,] -0.0867748664 -0.020366286  0.5298915990  4.160177e+00 -4.507292e-04
    #>  [3,] -0.0605347044 -0.285856817  0.1330584389 -3.027065e+00 -2.348484e-02
    #>  [4,]  0.3739681063  0.088855117 -0.7048845667 -5.257577e+00  8.509284e-03
    #>  [5,]  0.0455730742 -0.058676354  0.2255319007  6.883251e-01  1.834480e-02
    #>  [6,] -1.7555584648  0.187628157  0.5698203367  1.306991e+00  2.208491e-04
    #>  [7,]  0.1876281566 -1.111934470  0.0777368676 -1.058568e+00 -1.325888e-02
    #>  [8,]  0.5698203367  0.077736868 -2.5204264773 -2.774906e+00  7.558960e-03
    #>  [9,]  1.3069908000 -1.058567779 -2.7749056741 -1.941377e+01 -1.233878e-02
    #> [10,]  0.0002208491 -0.013258884  0.0075589597 -1.233878e-02 -6.093654e-03
    #> [11,] -0.0005919283 -0.000109976  0.0001258655  4.454019e-04 -3.267786e-05
    #>               [,11]
    #>  [1,]  1.751950e-04
    #>  [2,] -3.629234e-03
    #>  [3,] -1.882725e-04
    #>  [4,] -3.219606e-03
    #>  [5,]  2.817835e-05
    #>  [6,] -5.919283e-04
    #>  [7,] -1.099760e-04
    #>  [8,]  1.258655e-04
    #>  [9,]  4.454019e-04
    #> [10,] -3.267786e-05
    #> [11,] -9.352519e-04
    
    # Prob of each gene gaining a single function
    transition_prob(
      amodel,
      params = rep(0, nterms(amodel) - nfuns(amodel)), 
      duplication = TRUE, state = c(FALSE, FALSE),
      array = matrix(c(1, 0, 0, 1), ncol=2)
    )
    #> [1] 0.0625

    Model fitting MCMC

    set.seed(122)
    ans_mcmc <- geese_mcmc(
      amodel,
      nsteps  = 20000,
      kernel  = fmcmc::kernel_ram(warmup = 5000), 
      prior   = function(p) c(
          dlogis(
            p,
            scale = 4,
            location = c(
              rep(0, nterms(amodel) - nfuns(amodel)),
              rep(-5, nfuns(amodel))
              ),
            log = TRUE
            )
      ), ncores = 2L)

    We can take a look at the results like this:

    summary(window(ans_mcmc, start = 15000))
    #> 
    #> Iterations = 15000:20000
    #> Thinning interval = 1 
    #> Number of chains = 1 
    #> Sample size per chain = 5001 
    #> 
    #> 1. Empirical mean and standard deviation for each variable,
    #>    plus standard error of the mean:
    #> 
    #>                            Mean     SD Naive SE Time-series SE
    #> Gains 0 at duplication   2.9015 0.8051 0.011385        0.09034
    #> Gains 1 at duplication   1.6914 0.5653 0.007994        0.04934
    #> Loss 0 at duplication   -2.0287 0.5349 0.007563        0.05280
    #> Loss 1 at duplication   -1.8866 0.6442 0.009110        0.08533
    #> Gains 0 at speciation  -12.1932 3.5435 0.050107        1.15176
    #> Gains 1 at speciation   -0.1454 0.6609 0.009345        0.06815
    #> Loss 0 at speciation    -2.9909 0.5184 0.007331        0.04458
    #> Loss 1 at speciation    -5.1655 1.9408 0.027444        0.39515
    #> Genes with [0, 1] funs   2.2578 0.4569 0.006461        0.06265
    #> Root 1                  -1.0470 3.0807 0.043564        0.94842
    #> Root 2                  -4.2756 4.2474 0.060061        1.59284
    #> 
    #> 2. Quantiles for each variable:
    #> 
    #>                            2.5%      25%      50%      75%   97.5%
    #> Gains 0 at duplication   1.4054   2.3030   2.8777   3.4337  4.5624
    #> Gains 1 at duplication   0.5451   1.3327   1.7001   2.0905  2.7559
    #> Loss 0 at duplication   -3.0657  -2.3764  -2.0460  -1.6762 -0.9765
    #> Loss 1 at duplication   -3.1944  -2.3389  -1.8797  -1.4119 -0.6868
    #> Gains 0 at speciation  -18.2113 -14.9130 -12.1597 -10.1648 -3.6030
    #> Gains 1 at speciation   -1.5472  -0.5998  -0.1365   0.3416  1.0736
    #> Loss 0 at speciation    -4.0181  -3.3470  -2.9738  -2.6539 -2.0354
    #> Loss 1 at speciation    -9.4815  -6.5157  -4.8115  -3.6121 -2.3045
    #> Genes with [0, 1] funs   1.4263   1.9483   2.2481   2.5599  3.2238
    #> Root 1                  -5.9435  -3.5719  -1.4757   1.4858  4.7924
    #> Root 2                 -14.2253  -5.9892  -3.8179  -1.5920  3.3555
    
    par_estimates <- colMeans(
      window(ans_mcmc, start = end(ans_mcmc)*3/4)
      )
    ans_pred <- predict_geese(
      amodel, par_estimates,
      leave_one_out = TRUE,
      only_annotated = TRUE
      ) |> do.call(what = "rbind")
    
    # Preparing annotations
    ann_obs <- do.call(rbind, fake1)
    
    # AUC
    (ans <- prediction_score(ans_pred, ann_obs))
    #> Prediction score (H0: Observed = Random)
    #> 
    #>  N obs.      : 199
    #>  alpha(0, 1) : 0.40, 0.60
    #>  Observed    : 0.68 ***
    #>  Random      : 0.52 
    #>  P(<t)       : 0.0000
    #> --------------------------------------------------------------------------------
    #> Values scaled to range between 0 and 1, 1 being best.
    #> 
    #> Significance levels: *** p < .01, ** p < .05, * p < .10
    #> AUC 0.80.
    #> MAE 0.32.
    
    plot(ans$auc, xlim = c(0,1), ylim = c(0,1))

    Using a flock

    GEESE models can be grouped (pooled) into a flock.

    flock <- new_flock()
    
    # Adding first set of annotations
    add_geese(
      flock,
      annotations = fake1,
      geneid      = c(tree[, 2], n),
      parent      = c(tree[, 1],-1),
      duplication = duplication  
    )
    
    # Now the second set
    add_geese(
      flock,
      annotations = fake2,
      geneid      = c(tree[, 2], n),
      parent      = c(tree[, 1],-1),
      duplication = duplication  
    )
    
    # Persistence to preserve parent state
    term_gains(flock, 0:1, duplication = 1)
    term_loss(flock, 0:1, duplication = 1)
    term_gains(flock, 0:1, duplication = 0)
    term_loss(flock, 0:1, duplication = 0)
    term_maxfuns(flock, 0, 1, duplication = 2)
    
    
    # We need to initialize to do all the accounting
    init_model(flock)
    #> Initializing nodes in Flock (this could take a while)...
    #> ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| done.
    
    print(flock)
    #> FLOCK (GROUP OF GEESE)
    #> INFO ABOUT THE PHYLOGENIES
    #> # of phylogenies         : 2
    #> # of functions           : 2
    #> # of ann. [zeros; ones]  : [165; 235]
    #> # of events [dupl; spec] : [86; 112]
    #> Largest polytomy         : 2
    #> 
    #> INFO ABOUT THE SUPPORT
    #> Num. of Arrays       : 792
    #> Support size         : 8
    #> Support size range   : [1, 1]
    #> Transform. Fun.      : no
    #> Model terms (9)    :
    #>  - Gains 0 at duplication
    #>  - Gains 1 at duplication
    #>  - Loss 0 at duplication
    #>  - Loss 1 at duplication
    #>  - Gains 0 at speciation
    #>  - Gains 1 at speciation
    #>  - Loss 0 at speciation
    #>  - Loss 1 at speciation
    #>  - Genes with [0, 1] funs

    We can use the same program to fit the MCMC

    set.seed(122)
    ans_mcmc2 <- geese_mcmc(
      flock,
      nsteps  = 20000,
      kernel  = fmcmc::kernel_ram(warmup = 2000), 
      prior   = function(p) dlogis(p, scale = 2, log = TRUE),
      ncores  = 2
      )
    op <- par(
      mfrow = c(4, 2), #tcl=.5,
      las=1, mar = c(3,3,1,0),
      bty = "n", oma = rep(1,4)
      )
    for (i in 1:ncol(ans_mcmc2)) {
      tmpx <- window(ans_mcmc2, start = 10000)[,i,drop=FALSE]
      
      coda::traceplot(
        tmpx, smooth = FALSE, ylim = c(-11,11), col = rgb(0, 128, 128, maxColorValue = 255), 
        main = names(params)[i]
        )
      abline(h = params[i], lty=3, lwd=4, col = "red")
    }

    par(op)

    summary(window(ans_mcmc2, start = 10000))
    #> 
    #> Iterations = 10000:20000
    #> Thinning interval = 1 
    #> Number of chains = 1 
    #> Sample size per chain = 10001 
    #> 
    #> 1. Empirical mean and standard deviation for each variable,
    #>    plus standard error of the mean:
    #> 
    #>                            Mean     SD Naive SE Time-series SE
    #> Gains 0 at duplication  2.39204 0.4707 0.004707        0.03019
    #> Gains 1 at duplication  1.85804 0.4925 0.004925        0.02789
    #> Loss 0 at duplication  -2.15114 0.4451 0.004451        0.03310
    #> Loss 1 at duplication  -1.50477 0.4427 0.004427        0.03176
    #> Gains 0 at speciation  -4.10744 2.9954 0.029952        0.76564
    #> Gains 1 at speciation  -0.84969 0.8242 0.008241        0.09520
    #> Loss 0 at speciation   -3.16554 0.6535 0.006535        0.05307
    #> Loss 1 at speciation   -4.88115 2.0161 0.020160        0.32971
    #> Genes with [0, 1] funs  2.09933 0.3703 0.003702        0.02921
    #> Root 1                  0.02501 2.6487 0.026486        0.45210
    #> Root 2                 -1.07238 2.9197 0.029195        0.56841
    #> 
    #> 2. Quantiles for each variable:
    #> 
    #>                            2.5%    25%      50%     75%   97.5%
    #> Gains 0 at duplication   1.5050  2.068  2.37614  2.7239  3.3368
    #> Gains 1 at duplication   0.9237  1.511  1.84256  2.2029  2.8299
    #> Loss 0 at duplication   -3.0413 -2.451 -2.14564 -1.8533 -1.2836
    #> Loss 1 at duplication   -2.3961 -1.809 -1.51894 -1.1984 -0.6178
    #> Gains 0 at speciation  -11.2547 -5.414 -2.91312 -1.9486 -0.9131
    #> Gains 1 at speciation   -3.2320 -1.183 -0.72227 -0.3283  0.3280
    #> Loss 0 at speciation    -4.7209 -3.510 -3.08984 -2.7347 -2.0557
    #> Loss 1 at speciation   -10.5227 -5.326 -4.19469 -3.5823 -2.7532
    #> Genes with [0, 1] funs   1.3738  1.842  2.07762  2.3515  2.8303
    #> Root 1                  -4.7967 -1.873 -0.04377  1.5864  6.0565
    #> Root 2                  -6.5355 -3.147 -1.08668  1.1586  4.6030

    Are we doing better in AUCs?

    par_estimates <- colMeans(
      window(ans_mcmc2, start = end(ans_mcmc2)*3/4)
      )
    
    ans_pred <- predict_flock(
      flock, par_estimates,
      leave_one_out = TRUE,
      only_annotated = TRUE
      ) |>
      lapply(do.call, what = "rbind") |>
      do.call(what = rbind)
    
    # Preparing annotations
    ann_obs <- rbind(
      do.call(rbind, fake1),
      do.call(rbind, fake2)
    )
    
    # AUC
    (ans <- prediction_score(ans_pred, ann_obs))
    #> Prediction score (H0: Observed = Random)
    #> 
    #>  N obs.      : 398
    #>  alpha(0, 1) : 0.42, 0.58
    #>  Observed    : 0.72 ***
    #>  Random      : 0.51 
    #>  P(<t)       : 0.0000
    #> --------------------------------------------------------------------------------
    #> Values scaled to range between 0 and 1, 1 being best.
    #> 
    #> Significance levels: *** p < .01, ** p < .05, * p < .10
    #> AUC 0.86.
    #> MAE 0.28.
    plot(ans$auc)

    Limiting the support

    In this example, we use the function rule_limit_changes() to apply a constraint to the support of the model. This takes the first two terms (0 and 1, since indexing is in C++) and restricts the support to states with between 0 and 2 changes at most.

    This should be useful when dealing with multiple functions or polytomies.

    # Creating the object
    amodel_limited <- new_geese(
      annotations = fake1,
      geneid      = c(tree[, 2], n),
      parent      = c(tree[, 1],-1),
      duplication = duplication
      )
    
    # Adding the model terms
    term_gains(amodel_limited, 0:1)
    term_loss(amodel_limited, 0:1)
    term_maxfuns(amodel_limited, 1, 1)
    term_overall_changes(amodel_limited, TRUE)
    
    # At most one gain
    rule_limit_changes(amodel_limited, 5, 0, 2)
    
    # We need to initialize to do all the accounting
    init_model(amodel_limited)
    #> Initializing nodes in Geese (this could take a while)...
    #> ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| done.
    
    # Is limiting the support any useful?
    support_size(amodel_limited)
    #> [1] 31

    Since we added the constraint based on the term term_overall_changes(), we now need to fix the parameter at 0 (i.e., no effect) during the MCMC model:

    set.seed(122)
    ans_mcmc2 <- geese_mcmc(
      amodel_limited,
      nsteps  = 20000,
      kernel  = fmcmc::kernel_ram(
        warmup = 2000,
        fixed  = c(FALSE, FALSE, FALSE, FALSE, FALSE, TRUE, FALSE, FALSE)
        ), 
      prior   = function(p) dlogis(p, scale = 2, log = TRUE)
      )

    summary(window(ans_mcmc2, start = 15000))
    #> 
    #> Iterations = 15000:20000
    #> Thinning interval = 1 
    #> Number of chains = 1 
    #> Sample size per chain = 5001 
    #> 
    #> 1. Empirical mean and standard deviation for each variable,
    #>    plus standard error of the mean:
    #> 
    #>                                           Mean     SD Naive SE Time-series SE
    #> Gains 0 at duplication                 1.06329 0.8555 0.012097        0.06474
    #> Gains 1 at duplication                 1.00857 0.7727 0.010927        0.04945
    #> Loss 0 at duplication                 -1.44630 0.7529 0.010647        0.05664
    #> Loss 1 at duplication                 -0.65287 0.7342 0.010383        0.04529
    #> Genes with [1, 1] funs at duplication  1.04183 0.3736 0.005283        0.02301
    #> Overall changes at duplication         0.00000 0.0000 0.000000        0.00000
    #> Root 1                                -0.05519 3.1452 0.044476        0.35121
    #> Root 2                                -0.20215 3.2415 0.045837        0.41755
    #> 
    #> 2. Quantiles for each variable:
    #> 
    #>                                          2.5%     25%      50%     75%    97.5%
    #> Gains 0 at duplication                -0.5104  0.5096  1.07974  1.5870  2.75348
    #> Gains 1 at duplication                -0.3511  0.4883  0.97593  1.4741  2.72087
    #> Loss 0 at duplication                 -3.0046 -1.9420 -1.39766 -0.9289 -0.05484
    #> Loss 1 at duplication                 -2.0463 -1.1631 -0.65509 -0.2187  0.87313
    #> Genes with [1, 1] funs at duplication  0.3743  0.7911  1.01242  1.2674  1.88310
    #> Overall changes at duplication         0.0000  0.0000  0.00000  0.0000  0.00000
    #> Root 1                                -6.4868 -2.1595  0.08435  2.1941  5.72248
    #> Root 2                                -6.6845 -2.0668 -0.14747  1.7791  6.08394
    

    Code of Conduct

    Please note that the aphylo2 project is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.

    Visit original content creator repository https://github.com/USCbiostats/geese
  • payload-imagekit

    Payload CMS ImageKit Plugin

    This plugin syncs your images to ImageKit.

    Installation

    npm install payloadcms-plugin-imagekit

    Usage

    Install this plugin within your Payload as follows:

    import { buildConfig } from "payload/config";
    import imagekitPlugin from "payloadcms-plugin-imagekit";
    
    export default buildConfig({
      // ...
      plugins: [
        imagekitPlugin({
          config: {
            publicKey: "your_public_api_key",
            privateKey: "your_private_api_key",
            endpoint: "https://ik.imagekit.io/your_imagekit_id/",
          },
          collections: {
            media: {
              uploadOption: {
                folder: "some folder",
                extensions: [
                  {
                    name: "aws-auto-tagging",
                    minConfidence: 80, // only tags with a confidence value higher than 80% will be attached
                    maxTags: 10, // a maximum of 10 tags from aws will be attached
                  },
                  {
                    name: "google-auto-tagging",
                    minConfidence: 70, // only tags with a confidence value higher than 70% will be attached
                    maxTags: 10, // a maximum of 10 tags from google will be attached
                  },
                ],
              },
              savedProperties: ["url", "AITags"],
            },
          },
        }),
      ],
    });

    Plugin options

    This plugin takes one parameter, an object with the following options:

    Option                   Description
    config (required)        ImageKit config (ImageKitOptions)
    collections (optional)   Collections options

    config

    Type object

    • publicKey: type string
    • privateKey: type string
    • endpoint: type string

    collections

    Type object

    • [key] (required)
      type: string
      description: Object keys should be the PayloadCMS collection name that stores the media/images.
      value type: object
      value options:

      • uploadOption (optional)
        type: object
        type detail: TUploadOption, except file.
        description: Options to be saved to ImageKit.

      • savedProperties (optional)
        type: []string
        type detail: TImageKitProperties, except thumbnailUrl and fileId.
        description: Properties saved to PayloadCMS/the database that you may need for your frontend.

      • disableLocalStorage (optional)
        type: boolean
        default: true
        description: Completely disable uploading files to local disk.

    Payload Cloud

    If your project is hosted using Payload Cloud – their default file storage solution will conflict with this plugin. You will need to disable file storage via the Payload Cloud plugin like so:

    // ...
    plugins: [
      payloadCloud({
        storage: false, // Disable file storage
      }),
      imagekitPlugin({
        // Your imagekit config here
      }),
    ],
    // ...

    Screenshot

    image

    Visit original content creator repository https://github.com/novanda1/payload-imagekit
  • noobies.ai

    logo


    Description

    noobies.ai is an open-source project designed to empower users in AI-driven content generation. It provides an extensive set of tools for creating diverse content, including blogs, images, videos, and audio. The project aims to simplify AI-based content creation while ensuring accessibility and user-friendliness.

    Screenshots

    image

    Features

    Blog Generation

    The project includes a powerful blog generation module, allowing users to effortlessly create engaging written content.

    from noobies_ai.core import blog_generator
    
    # Example usage
    generated_blog = blog_generator.generate_blog(topic="AI in 2024")

    Video Generation

    Generate dynamic video content with ease using the video generation module.

    from noobies_ai.core import video_generator
    
    # Example usage
    generated_video = video_generator.generate_video(topic="Future Technologies")

    AI Utilities

    ImageAI

    Harness the power of ImageAI to process and analyze images.

    from noobies_ai.core.utils.AI import imageAI
    
    # Example usage
    image_labels = imageAI.process_image("path/to/image.jpg")

    TextAI

    Generate AI-driven text content effortlessly.

    from noobies_ai.core.utils.AI import textAI
    
    # Example usage
    generated_text = textAI.generate_text(prompt="Describe a futuristic city.")

    AudioAI

    Explore the capabilities of AudioAI for audio-related tasks.

    from noobies_ai.core.utils.AI import audioAI
    
    # Example usage
    transcription = audioAI.transcribe_audio("path/to/audio.mp3")

    Content Conversion

    Convert content seamlessly between different formats.

    from noobies_ai.core.utils.converter import blog_converter, image_converter, video_converter
    
    # Example usage
    converted_blog = blog_converter.convert_to_blog(generated_text)
    converted_image = image_converter.convert_to_image(generated_blog)
    converted_video = video_converter.convert_to_video(generated_text)

    Content Download

    Download various types of content effortlessly.

    from noobies_ai.core.utils.downloader import audio_downloader, image_downloader, text_downloader, video_downloader
    
    # Example usage
    audio_downloader.download_audio("https://example.com/audio.mp3", destination="downloads/")
    image_downloader.download_image("https://example.com/image.jpg", destination="downloads/")
    text_downloader.download_text("https://example.com/text.txt", destination="downloads/")
    video_downloader.download_video("https://example.com/video.mp4", destination="downloads/")

    Project Structure


    The project is organized into the following main components:

    • core: Contains the core functionalities of the project.

      • blog_generator.py: Module for generating blog content.
      • video_generator.py: Module for generating video content.
      • utils: Utilities module containing AI-related tools.
        • AI: Submodule for AI functionalities.
          • audioAI.py: Module for handling audio AI.
          • imageAI.py: Module for handling image AI.
          • textAI.py: Module for handling text AI.
          • videoAI.py: Module for handling video AI.
          • syntax: Submodule for syntax-related tools.
            • blog_syntax.py: Module for blog syntax.
            • video_syntax.py: Module for video syntax.
          • prompt: Submodule for handling AI prompts.
            • audio_prompts.py: Module for audio prompts.
            • image_prompts.py: Module for image prompts.
            • text_prompt.py: Module for text prompts.
            • video_prompt.py: Module for video prompts.
        • converter: Submodule for converting content.
          • blog_converter.py: Module for converting blog content.
          • image_converter.py: Module for converting image content.
          • video_converter.py: Module for converting video content.
        • downloader: Submodule for downloading content.
          • audio_downloader.py: Module for downloading audio content.
          • image_downloader.py: Module for downloading image content.
          • text_downloader.py: Module for downloading text content.
          • video_downloader.py: Module for downloading video content.
        • extractor: Placeholder module for content extraction.
    • static: Directory for static files, including images and logos.

    • pages: Directory containing additional project pages.

    • app.py: Main application script using Streamlit for the user interface.

    Streamlit Application

    The primary interface for interacting with noobies.ai is a Streamlit web application. Follow these steps to run the application locally:

    1. Install dependencies:

      pip install -r requirements.txt
    2. Run the Streamlit app:

      streamlit run app.py

      This will launch the application in your default web browser.

    3. Explore the Features:

      Navigate through the different sections of the application to explore and use the various content generation features. Interact with the intuitive user interface to leverage the power of AI in content creation.

      Streamlit App Screenshot

    Usage Examples

    Blog Generation

    from noobies_ai.core import blog_generator
    
    # Example usage
    generated_blog = blog_generator.generate_blog(topic="AI in 2024")

    Dependencies

    Ensure you have the required dependencies installed by running:

    pip install -r requirements.txt

    Contributing

    We welcome contributions! Feel free to open issues, submit pull requests, or provide feedback.

    License

    This project is licensed under the MIT License.


    Visit original content creator repository https://github.com/0aaryan/noobies.ai
  • talkomatic-classic

    Visit original content creator repository
    https://github.com/ZackiBoiz/talkomatic-classic

  • retail-customer-segmentation-forecasting

    Exploring Customer Segmentation and Customer Lifetime Value for Sales Forecasting


    Background

    Welcome to the data exploration journey of understanding customer behavior and enhancing sales forecasting for a UK-based company specializing in unique all-occasion gifts. Our goal is to unlock valuable insights from customer data and historical sales, laying the foundation for effective customer segmentation and improved sales predictions.

    Objectives

    • Understand the Data:

    • Exploratory Data Analysis (EDA):

      • Perform comprehensive exploratory data analysis to uncover hidden patterns, trends, and anomalies within the dataset.
    • Data Preparation:

      • Preprocess and prepare the data for subsequent analyses, ensuring its suitability for modeling.
    • Customer Segmentation:

      • Utilize advanced segmentation techniques to categorize customers based on their behavior, preferences, and historical interactions.
    • Forecasting Models:

      • Develop and implement tailored forecast models for each customer segment, aiming for accurate sales predictions.
    • Results Presentation:

      • Present the findings, insights, and actionable recommendations in a clear and concise manner.

    Data Description

    The heart of our exploration lies in the Online Retail II dataset, offering a real-world snapshot of online retail transactions. The primary data elements include:

    online_retail_II.xlsx
    This comprehensive table captures records for all created orders, boasting 1,067,371 rows and 8 columns. With a size of 44.55MB, it serves as a rich source of information for our analysis.

    Data Element   Type       Description
    Invoice        object     Invoice number, uniquely assigned to each transaction. If it starts with ‘c’, it signifies a cancellation.
    StockCode      object     Unique product (item) code assigned to each distinct product.
    Description    object     Descriptive name of the product (item).
    Quantity       int64      Quantity of each product (item) per transaction.
    InvoiceDate    datetime   Date and time of invoice generation.
    Price          float64    Unit price of the product in pounds (£).
    Customer ID    int64      Unique 5-digit customer identifier.
    Country        object     Country where the customer resides.
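
    To make the objectives concrete, here is a minimal, hypothetical pandas sketch of loading one of the CSV files from the file tree below and computing simple RFM (Recency, Frequency, Monetary) features per customer, in the spirit of reference 5. The path, cleaning rules, and reference date are illustrative assumptions, not code from this repository.

    import pandas as pd

    # Load one year of transactions (path from the file tree below)
    df = pd.read_csv("data/2009-2010.csv", parse_dates=["InvoiceDate"])

    # Drop cancellations (invoices starting with 'c') and rows without a customer
    df = df[~df["Invoice"].astype(str).str.upper().str.startswith("C")]
    df = df.dropna(subset=["Customer ID"])
    df["Revenue"] = df["Quantity"] * df["Price"]

    # Recency / Frequency / Monetary features per customer,
    # measured against the latest invoice date in the file
    now = df["InvoiceDate"].max()
    rfm = df.groupby("Customer ID").agg(
        recency=("InvoiceDate", lambda s: (now - s.max()).days),
        frequency=("Invoice", "nunique"),
        monetary=("Revenue", "sum"),
    )
    print(rfm.head())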

    File Tree

    .
    ├── data
    │   ├── 2009-2010.csv
    │   └── 2010-2011.csv
    ├── models
    │   └── t2v
    ├── notebooks
    │   ├── ds4a_retail_challenge.ipynb
    │   ├── gensim_lda.py
    │   └── utils.py
    ├── README.md
    └── requirements.txt
    

    References

    1. https://www.machinelearningplus.com/nlp/topic-modeling-gensim-python/
    2. https://github.com/nicodv/kmodes
    3. https://towardsdatascience.com/understanding-topic-coherence-measures-4aa41339634c
    4. https://medium.com/@thomas.shawcarveth/market-segmentation-and-predicting-marketing-success-with-data-science-f48c99e3b4e1
    5. https://www.geeksforgeeks.org/rfm-analysis-analysis-using-python/
    6. https://www.machinelearningplus.com/time-series/arima-model-time-series-forecasting-python/

    Visit original content creator repository
    https://github.com/evansphillips/retail-customer-segmentation-forecasting

  • Album-Art-Bottle-LEDs

    Album art bottle LEDs

    Picture of project on desk

    I got a Pimoroni Wireless Plasma Kit and wanted to do something fun with it! I had the idea to create a custom palette of colors based on what I was listening to!

    Album art is often iconic and I thought it’d be cool to get a subtle hint of the colors of my favorite album covers as their songs play!

    I realized that last.fm shares album art over its API and as a long time member, that seemed like a great place to start.

    By combining code for API access, dominant color extraction, NeoPixel updates and socket networking I was able to throw this together in an evening.

    Materials

    • A last.fm account to pull from w/ API key
    • A ‘server’ (such as a Raspberry Pi Pico) to talk to the LEDs
    • A ‘client’ computer which:
      • Plays music / can obtain the currently playing song (eg mpc/mpd)
      • Generate color palettes via API calls
      • [Optionally] detects BPM (via eg bpm from bpm-tools)

    Workflow

    I really wish the pico could do all of the image processing, but jpeg decoding, let alone k-means, is probably a tall order… so I arrived at this slightly hacky client/server architecture.

    The client code (running on a ‘real’ computer) does most of the heavy lifting by:

    • Checking last.fm for the most recently scrobbled track
    • Downloading its cover art
    • Extracting the NUM_COLORS most common colors
    • Padding that out to NUM_LEDS and sending the update to the server (a minimal sketch follows this list)
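
    For illustration, here’s a rough Python sketch of those client-side steps, assuming Pillow, scikit-learn, and requests are installed; the function name, constants, and padding scheme are hypothetical stand-ins rather than the repo’s actual code.

    from io import BytesIO

    import requests
    from PIL import Image
    from sklearn.cluster import KMeans

    NUM_COLORS = 4  # assumed values; the real constants live at the top of the client
    NUM_LEDS = 50

    def palette_from_art(url):
        # Download the cover art and shrink it so k-means stays fast
        img = Image.open(BytesIO(requests.get(url, timeout=10).content))
        pixels = list(img.convert("RGB").resize((64, 64)).getdata())

        # Cluster the pixels; the cluster centers act as the dominant colors
        km = KMeans(n_clusters=NUM_COLORS, n_init=10).fit(pixels)
        colors = [tuple(int(c) for c in center) for center in km.cluster_centers_]

        # Pad the palette out to one RGB triple per LED (simple repetition)
        return [colors[i % NUM_COLORS] for i in range(NUM_LEDS)]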

    Here’s what that looks like, in the client’s terminal; note the track name, album art visualization and palette preview:

    Picture of the visualization in terminal

    (There are now 4 palette extraction algorithms and all of them are previewed though only one is sent to the server!)

    The server code, running on the pico, is responsible for (a rough sketch follows the list):

    • Accepting “palette” updates (which are a list of NUM_LED RGB values)
    • Managing the LED colors

    Running the project

    • Have a look at the constants at the top of the client / server and see if you wanna make any adjustments
    • Install the libraries in requirements.txt on your client
    • Use thonny or something like it to run the server code on the pi
      • It’ll glow green when it’s ready for a client connection
    • Run the client code (once you add the API key and username) on a ‘real’ computer to send palettes to the server

    Good Stuff

    • Proud of the janky palette transition logic; it’s exciting when songs switch!
    • Gentle animation is nice
    • Because the pico has wifi, it can be anywhere in your home! On a high shelf, even. Wireless is cool 🙂
    • The updates are pretty slick
    • Once a palette has been received, it’ll keep on displaying it until a new one is received.
    • Pretty pleased by the threading code on the pico for handling animation and network updates 🙂

    BPM support

    If you set a BPM_CMD, you should be able to have the lights pulse at the BPM of the audio!

    I use a shell script like this:

    #!/bin/bash
    BPMPATH=`which bpm`
    FFMPEGPATH=`which ffmpeg`
    BCPATH=`which bc`
    
    BASE=/mnt/media/music/music
    PATH=`mpc status --format "%file%" | head -n1`
    
    FULLPATH=$BASE/$PATH
    
    # convert to raw audio using ffmpeg + measure bpm!
    # Thanks to Mark Hills for `bpm` and https://gist.github.com/brimston3/34dbb439442a723313b019b92931887b !
    bpm=$($FFMPEGPATH -hide_banner -loglevel error -vn -i "$FULLPATH" -ar 44100 -ac 1 -f f32le pipe:1 | $BPMPATH)
    #echo "BPM=$bpm"
    
    # Calculate Delay
    delay=$($BCPATH -l <<< 60.0/$bpm)
    #echo "Delay=$delay"
    echo $delay

    Future Work

    Here are a few ideas I have for future improvements…

    Client:

    • Automatically choose the best number of clusters?
    • TUI interface:
      • q for quit
      • 1,2,3,4 etc for different color extraction methods?
    • cache recent songs:
      • can’t really cache checks with last.fm; move to a model where we search last.fm for the song on change?
    • API retries?
      • more interesting fake patterns if missing a song!
    • extract hues rather than brightnesses?

    Server:

    • It’s possible that having all the LEDs stuffed into a bottle doesn’t give the best sense of the palette; they might look cool mounted on a wall or somewhere else!
      • (train) lantern?
      • Mount them along a wall or around a window frame?
    • More interesting animation patterns?
    • create multiple “endpoints” for various controls
      • changing animation speed?
      • explicitly set LEDs
      • test pattern?

    Bugs to fix:

    Traceback (most recent call last):
      File "/home/jesse/projects/album_art_bottle_LEDs/plasma_client.py", line 340, in <module>
        main()
      File "/home/jesse/projects/album_art_bottle_LEDs/plasma_client.py", line 321, in main
        colors = generate_palette(session, methods)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/jesse/projects/album_art_bottle_LEDs/plasma_client.py", line 245, in generate_palette
        payload = get_info_from_last_scrobble()
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/jesse/projects/album_art_bottle_LEDs/plasma_client.py", line 155, in get_info_from_last_scrobble
        last_track = response.json()['recenttracks']['track'][0]
                     ~~~~~~~
    
    Visit original content creator repository https://github.com/heavyimage/Album-Art-Bottle-LEDs