Blog

  • MSGraph

    MSGraph

    MS Graph RnD. Porting to W10M in progress…

    My Goals

    • Main goal: an attempt to adapt the MS Graph UWP Sample via “UWP down-shifting” for all my devices (PC / Xbox, and… my old sweet Microsoft Lumia 950!)
    • Super goal: reconstruct the Microsoft To Do API logic… as part of Microsoft Graph.

    Screenshots

    Shot 1 Shot 2

    Progress

    • Microsoft.Graph project added (obsolete… but works)
    • Microsoft.Graph.Core project added (obsolete too)
    • Proof-of-concept

    Architecture

    Topology

    Prerequisites

    To run the completed project in this folder, you need the following:

    • Visual Studio installed on your development machine. If you do not have Visual Studio, visit the previous link for download options. (Note: This tutorial was written with Visual Studio 2019 version 16.5.0. The steps in this guide may work with other versions, but that has not been tested.)
    • Either a personal Microsoft account with a mailbox on Outlook.com, or a Microsoft work or school account.

    If you don’t have a Microsoft account, there are a couple of options to get a free account:

    Register a native application with the Azure Active Directory admin center

    1. Open a browser and navigate to the Azure Active Directory admin center and log in using a personal account (aka Microsoft account) or a work or school account.

    2. Select Azure Active Directory in the left-hand navigation, then select App registrations under Manage.

    3. Select New registration. On the Register an application page, set the values as follows.

      • Set Name to UWP Graph Tutorial.
      • Set Supported account types to Accounts in any organizational directory and personal Microsoft accounts.
      • Under Redirect URI, change the dropdown to Public client (mobile & desktop), and set the value to https://login.microsoftonline.com/common/oauth2/nativeclient.
    4. Choose Register. On the UWP Graph Tutorial page, copy the value of the Application (client) ID and save it; you will need it in the next step.

    References

    https://docs.microsoft.com/en-us/graph/use-the-api

    https://docs.microsoft.com/en-us/graph/api/resources/todo-overview?view=graph-rest-1.0

    — [m][e] 2023

    Visit original content creator repository https://github.com/mediaexplorer74/MSGraph
  • data-structure-using-php

    About

    Implementation of Stack, Queue, and Set data structures using PHP.

    Stack Class Available methods

    • isFull() – returns true if the stack is full
    • isEmpty() – returns true if the stack is empty
    • push($value) – inserts an item into the stack
    • pop() – removes the top item from the stack
    • getStack() – returns the stack
    • peak() – returns the top item of the stack

    How to use?

    try {
        $stack = new Stack();
        print_r( $stack->push( 1 )->push( 2 )->push( 3 )->pop()->push( 4 )->getStack() );
    } catch ( \Throwable $th ) {
        echo $th->getMessage();
    }

    Method chaining is available for push and pop.

    You can define the Stack size, for example new Stack(10). The default size is 5.

    Queue Class Available methods

    • isFull() – returns true if the queue is full
    • isEmpty() – returns true if the queue is empty
    • enqueue($value) – inserts an item into the queue
    • dequeue() – removes an item from the queue
    • getQueue() – returns the queue
    • peak() – returns the front item of the queue
    • output() – prints details of the queue

    How to use?

    try {
        $queue = new Queue();
        print_r( $queue->enqueue( 1 )->dequeue()->enqueue( 2 )->enqueue( 3 )->enqueue( 4 )->enqueue( 5 )->getQueue() );
        $queue->dequeue()->output();
    } catch ( \Throwable $th ) {
        print_r( $th->getMessage() );
    }

    Method chaining is available for enqueue and dequeue.

    You can define the Queue size, for example new Queue(10). The default size is 5.

    Set Class Available methods

    • add($value) – inserts an item into the Set
    • remove($value) – removes an item from the Set
    • isExists($value) – returns true if the value exists in the Set
    • getSet() – returns the Set array
    • getSize() – returns the size of the Set
    • max() – returns the maximum value of the Set
    • min() – returns the minimum value of the Set

    How to use?

    try {
        $set = new Set();
        print_r( $set->add( 5 )->add( 6 )->add( 5 )->remove( 6 )->add( 9 )->getSet() );
        echo "Max value: " . $set->max() . "\n";
        echo "Set size: " . $set->getSize() . "\n";
    } catch ( \Throwable $th ) {
        echo $th->getMessage();
    }

    Method chaining is available for add and remove.

    Note: example code is given in index.php.

    Visit original content creator repository
    https://github.com/RoyHridoy/data-structure-using-php

  • tccr


    Title

    Project description goes here. This description is usually two to three lines long. It should give an overview of what the project is, e.g. the technology used, its philosophy of existence, what problem it is trying to solve, etc. If you need to write more than 3 lines of description, create subsections.

    **NOTICE:** put here a message that is very relevant to users of the project, if any.

    Features

    Here you can place screenshots of the project. Also describe your features using a list:

    • Easy integration;
    • Few dependencies;
    • Beautiful template-english with a nice README;
    • Great documentation and testing?

    Getting started

    1. First step to get started

    Usually the first step to get started is to install dependencies to run the project. Run:

    apt-get install dependency
    

    It is recommended to place each command on a different line:

    apt-get install something else
    

    This way users can copy and paste without reading the documentation (which is what usually happens).

    2. Other step(s)

    Usually the next steps teach you how to install and configure the project for use / development. Run:

    git clone https://github.com/ccuffs/template-english template-english
    

    Contribute

    Your help is most welcome, regardless of form! Check out the CONTRIBUTING.md file for all the ways you can contribute to the project. For example, suggest a new feature, report a problem/bug, submit a pull request, or simply use the project and comment on your experience. You are encouraged to participate as much as possible, but be sure to read the code of conduct before interacting with other community members.

    See the ROADMAP.md file for an idea of how the project should evolve.

    License

    This project is licensed under the MIT open-source license and is available for free.

    Changelog

    See all changes to this project in the CHANGELOG.md file.

    Similar projects

    Below is a list of interesting links and similar projects:

    Visit original content creator repository https://github.com/ccuffs/tccr
  • redmine-time-tracking

    Redmine Time Tracking (Chrome Extension / Firefox Extension)

    Start-stop timer for Redmine.

    Features

    • View all your assigned Redmine issues grouped by projects
    • Filter issues by projects
    • Group issues by target version
    • Search for issues (press CTRL + K or CTRL + F)
    • Start, stop and edit the timer for your current tasks
    • Create entry for time spent (and for multiple users at once)
    • Update done ratio for issues
    • Pin and unpin issues (display at the top of the project)
    • Remember and forget issue (not assigned to you)
    • View time entries for current and last week
    • Multiple languages
    • Dark & light mode (system default)

    Requirements

    Redmine version 3.0 or higher is required; version 5.0 or higher is recommended.

    Unsupported features by Redmine versions

    Feature Unsupported Redmine version
    Show only enabled issue field for selected tracker when creating new issues < 5.0.0
    Show only allowed statuses when updating issue < 5.0.0
    Show spent vs estimated hours < 5.0.0
    Select the default fixed version when creating new issues < 4.1.1
    Check permissions for admin users who are not members of a project < 4.0.0
    Display project-available time entry activities when adding spent time entries < 3.4.0
    Extended search < 3.3.0

    Tested with Google Chrome version 130 and Firefox 132.

    Supported languages

    If you want to add more languages or extend existing ones, feel free to contribute. Just create a pull request with the desired changes. The language files are located under src/lang and public/_locales.

    Screenshots

    (Screenshots: issues, time entries, settings, search, add spent time, context menu)

    Credits

    Logo is Copyright (C) 2009 Martin Herr and is licensed under Creative Commons (https://www.redmine.org/projects/redmine/wiki/logo)

    Visit original content creator repository https://github.com/CrawlerCode/redmine-time-tracking
  • elastic-airflow-cluster-k8s-setup-manifests

    elastic-airflow-cluster-k8s-setup-manifests

    This repo contains dockerfiles/yaml for airflow cluster components.

    Follow the blog post on this: https://medium.com/@sarweshsuman.1/elastic-autoscaling-airflow-cluster-in-kubernetes-14c16c73cac9

    Setting up for a demo on Mac

    • setup minikube

      Install minikube

      https://kubernetes.io/docs/tasks/tools/install-minikube/
      

      Make sure the minikube VM has enough CPU and memory to run several pods.

      minikube start --cpus 4 --memory 10240
      eval $(minikube docker-env)
      

      Important:
      Create the directories below within the minikube VM by logging into the VM.
      These directories serve as the dags/logs folders, and we will be mounting them into the pods for sharing & persistence.

      minikube ssh
      sudo mkdir dags
      sudo mkdir logs
      sudo chmod -R 777 dags
      sudo chmod -R 777 logs
      logout
      

      This has to be done every time you restart minikube.
      You can avoid it by creating a mount from your local Mac to the above paths in the minikube VM.
      For the demo, I am sticking with this manual step.

    • Set up the dependent repo

      This repo has the ElasticWorker/ElasticWorkerAutoscaler CRD and controllers code.

      git clone https://github.com/sarweshsuman/elastic-worker-autoscaler.git
      cd elastic-worker-autoscaler/
      make
      make install
      make docker-build IMG=elastic-worker-controllers:0.1
      make deploy IMG=elastic-worker-controllers:0.1  
      

      This compiles the controller code, builds the image, and deploys it into the minikube cluster namespace elastic-worker-autoscaler-system.

      Validate that the pod is up and running:

      kubectl get pods -n elastic-worker-autoscaler-system
      
    • Set up the custom metric API server adapter

      This repo has the custom metric adapter code, which works closely with the ElasticWorkerAutoscaler controller.

      This setup can be replaced with a Prometheus setup when moving to production.

      git clone https://github.com/sarweshsuman/elastic-worker-custommetrics-adapter.git
      cd elastic-worker-custommetrics-adapter/
      GOOS=linux go build -o docker/
      cd docker/
      docker build -t elasticworker-custommetric-adapter:0.1 .
      

      This builds the custom metric adapter and creates a Docker image.
      Now we deploy it into minikube.

      cd ../manifest
      kubectl create -f redis-metric-db.yaml
      kubectl create -f elasticworker-adapter.yaml
      

      Validate that all pods are up and running:

      kubectl get pods -n elasticworker-custommetrics
      

      See https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-metrics-apis for more info on how to set up custom metrics.

    • Clone this repo.

      git clone https://github.com/sarweshsuman/elastic-airflow-cluster-k8s-setup-manifests.git
      cd elastic-airflow-cluster-k8s-setup-manifests/
      
    • Build the rest of the component images

      Build the airflow image; it is shared by the scheduler, worker, and flower.

      cd docker-airflow/
      docker build -t airflow-slim:0.1 .
      

      Build the postgres image

      cd ../docker-postgres/
      docker build -t airflow-postgres:0.1 .
      

      Build the rabbitmq image

      cd ../docker-rabbitmq/
      docker build -t airflow-rabbitmq:0.1 .
      

      All needed images are now built.

      Validate that all images were created:

      docker image ls
      
    • Deploy into minikube

      Wait about 5 seconds between commands to avoid putting all the load on your Mac at once.

      cd ..
      kubectl create -f airflow-rabbitmq.yaml
      kubectl create -f airflow-postgres.yaml
      kubectl create -f airflow-scheduler.yaml
      kubectl create -f airflow-flower.yaml
      kubectl create -f elasticcluster-worker.yaml
      kubectl create -f elasticcluster-autoscaler.yaml
      

      Validate that all pods are up and running:

      kubectl get pods
      
    • Test the cluster with a DAG.

      To test, we first log into the minikube VM and create a DAG file.

      A sample DAG is at https://github.com/sarweshsuman/elastic-airflow-cluster-k8s-setup-manifests/tree/master/sample-dags; an illustrative sketch is also shown after the commands below.

      minikube ssh
      cd dags
      cat>dag_1.py
      ....PASTE CONTENT FROM SAMPLE DAG....
      ctrl-d
      logout
      
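
      For reference, a minimal DAG along these lines might look like the following. This is an illustrative sketch only, assuming the Airflow 1.x-style API this setup was written against; the actual sample lives in the sample-dags folder linked above.

      from datetime import datetime

      from airflow import DAG
      from airflow.operators.bash_operator import BashOperator

      # the dag_id matches the `dag_1` used in the commands below
      dag = DAG(
          dag_id="dag_1",
          start_date=datetime(2020, 1, 1),
          schedule_interval=None,  # run only when triggered manually
      )

      hello = BashOperator(
          task_id="hello",
          bash_command="echo hello from the elastic worker",
          dag=dag,
      )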

      Trigger the DAG from within the scheduler pod:

      kubectl get pods
      kubectl exec -it airflow-scheduler-76d5df7b9b-948k2 bash
      >cd dags/
      >airflow unpause dag_1
      >airflow trigger_dag dag_1
      >logout
      

      This will trigger the DAG. If everything is fine, the worker will execute it.

      Note: you might see an error when unpausing the DAG; this happens because the scheduler has not picked up the DAG yet. If you retry, the issue will go away. Alternatively, you can use -sd to specify the subdirectory manually.

      You can open the Flower UI to check the status of the cluster.

      kubectl cluster-info
      

    Visit original content creator repository
    https://github.com/sarweshsuman/elastic-airflow-cluster-k8s-setup-manifests

  • tp5-rbac

    tp5-rbac

    This extension is an RBAC package for ThinkPHP 5 (tp5). It uses several tp5 features to handle the special data structures involved in a relational database.

    Installation

    First install Composer (if you do not know how to install or use Composer, please look it up yourself).
    Then open a command-line tool and switch to the root directory of your tp5 project:

    composer require gmars/tp5-rbac
    

    If that command fails, proceed as follows:

    1. Open composer.json in the project root
    2. Add “gmars/tp5-rbac”: “dev-master” to the require section
    3. Run composer update

    After the change, composer.json should contain a section like this:

        "require": {
            "php": ">=5.4.0",
            "topthink/framework": "^5.0",
            "gmars/tp5-rbac": "dev-master"
        },
    

    Usage for v2.0+

    Configuration

    Add this configuration to config/app.php:

    'rbac' => [
        'type' => 'jwt',    // verification mode: jwt (token-based) or service (cookie/session-based)
        'db' => '',         // database config used by rbac; empty means the default database (the generated table prefix depends on this)
        'salt_token' => 'asdfasfdafasf',    // token encryption secret
        'token_key' => 'Authorization'      // header name used for token verification
    ]

    Usage

    Instantiate Rbac

    $rbac = new Rbac();

    Management operations

    Initialize the tables required by rbac

    // Optional $db parameter: a database config entry; empty (default) uses the default database (for multi-database setups)
    $rbac->createTable();

    This method generates the tables rbac needs and is usually run only once. For safety, a lock is created after it runs; to run it again you must delete the lock file first.

    Create a permission category

    $rbac->savePermissionCategory([
        'name' => 'User management group',
        'description' => 'Website user management',
        'status' => 1
    ]);

    Creating and editing use the same method; when editing, include the primary key id in the parameters.

    Create a permission node

    $rbac->createPermission([
        'name' => 'Article list query',
        'description' => 'Article list query',
        'status' => 1,
        'type' => 1,
        'category_id' => 1,
        'path' => 'article/content/list',
    ]);
    • For an update, include the primary key id in the parameter array
    • type is the permission type: 1 for back-end permissions, 2 for front-end permissions (mainly for SPA use)
    • category_id is the id of the permission category created in the previous step
    • On success, the added permission record is returned; on error, an exception is thrown

    Create a role & assign permissions to it

    $rbac->createRole([
        'name' => 'Content administrator',
        'description' => 'Responsible for website content management',
        'status' => 1
    ], '1,2,3');
    • For an update, include the primary key in the first parameter
    • The second parameter is a string of permission node ids joined with commas

    Assign roles to a user

    $rbac->assignUserRole(1, [1]);
    • This method removes the roles previously assigned to the user
    • The first parameter is the user id
    • The second parameter is an array of role ids

    Get the permission category list

    $rbac->getPermissionCategory([['status', '=', 1]]);
    • The parameter accepts an id (to fetch a single record) or a standard where expression (to fetch a list); pass an empty array to fetch everything

    Get the permission list

    $rbac->getPermission([['status', '=', 1]]);
    • The parameter accepts an id (to fetch a single record) or a standard where expression (to fetch a list); pass an empty array to fetch everything

    Get the role list

    $rbac->getRole([], true);
    • The first parameter accepts an id (to fetch a single record) or a standard where expression (to fetch a list); pass an empty array to fetch everything
    • The second parameter controls whether each role’s assigned permission ids are also fetched; defaults to true

    Delete permission categories

    $rbac->delPermissionCategory([1,2,3,4]);
    • The parameter accepts a single id or a list of ids

    Delete permissions

    $rbac->delPermission([1,2,3,4]);
    • The parameter accepts a single id or a list of ids

    Delete roles

    $rbac->delRole([1,2,3,4]);
    • The parameter accepts a single id or a list of ids
    • Deleting a role also deletes the permissions assigned to it (the role-permission associations)

    Verification operations

    service mode

    service mode relies on the session and therefore generally depends on cookies. After the user logs in, cache the user’s permissions:

    $rbac->cachePermission(1);
    • The parameter is the logged-in user’s user_id
    • This method returns the user’s full permission list

    Verify on each user request:

    $rbac->can('article/channel/list');
    • Returns true if the user has the permission, false otherwise

    jwt mode

    jwt mode is common in front-end/back-end separated architectures. After the user logs in, obtain a token:

    $rbac->generateToken(1);
    • The first parameter is the logged-in user’s id
    • The second parameter is the token lifetime, 7200 seconds by default
    • The third parameter is a token prefix
      The return value is:

    array(3) {
      ["token"] => string(32) "4c56b80f06d3d8810b97db33a1291694"
      ["refresh_token"] => string(32) "17914241bde6bfc46b20e643b2c58279"
      ["expire"] => int(7200)
    }

    Use refresh_token to refresh authorization

    $rbac->refreshToken('17914241bde6bfc46b20e643b2c58279');

    Use the refresh_token within its validity period to refresh the authorization.
    Verify on each user request:

    $rbac->can('article/channel/list');

    Usage for versions < 2.0

    Data migration (optional; you can instead import the bundled gmars_rbac.sql file directly)

    Before using this plugin, the database tables rbac requires must exist. Before migrating, if your database already contains a user table, back it up and then drop it.

    Add the following configuration to one of your project’s config.php files:

    'migration' => [
        'path' => ROOT_PATH .'vendor/gmars/tp5-rbac/'
    ],

    Then switch the command line to your project root directory (on Windows, use cmd) and run the following command:

    php think migrate:run

    If the migration succeeds, the following tables are generated in your database:

    user              user table
    user_role         user-role mapping table
    role              role table
    role_permission   role-permission mapping table
    permission        permission table

    Using the plugin – RBAC management

    In a system, RBAC means role-based access control. As a developer you need to understand that two distinct processes are involved. The first is building the system’s RBAC structure: adding permissions, roles, users, user-role mappings, role-permission mappings, and so on.

    RBAC management first:

    1. Add a user

    This step is done at user registration: the registering user is added to the user table.

    $rbacObj = new Rbac();
    $data = ['user_name' => 'zhangsan', 'status' => 1, 'password' => md5('zhangsan')];
    $rbacObj->createUser($data);

    createUser takes a single parameter, which must be an array containing the data the user table needs. If the array contains fields that do not belong to the user table, an exception is thrown.
    The method returns false, throws an Exception, or returns the id of the newly added user.

    2. Add a permission

    This step builds the system’s permissions. We usually use the request route as the permission identifier; in this plugin that is the path field.

    For example, suppose our system has a goods-list operation that requires authorization.

    Its route is /index/goods/list

    Add the permission as follows:

    $rbacObj = new Rbac();
    $data = [
        'name' => 'Goods list',
        'status' => 1,
        'description' => 'View the full list of goods',
        'path' => '/index/goods/list',
        'create_time' => time()
    ];
    $rbacObj->createPermission($data);

    3. Add a role

    Roles in this RBAC have parent-child relationships; that is, a newly added role can be a child of another role.

    $rbacObj = new Rbac();
    $data = [
        'name' => 'Goods administrator',
        'status' => 1,
        'description' => 'The goods administrator is responsible for viewing, editing and deleting goods',
        'sort_num' => 10,
        'parent_id' => 1
    ];
    $rbacObj->createRole($data);

    Note the parent_id field in data: it identifies the parent of the role being added. If left empty, the role is added as a top-level (parent) role.

    4. Assign roles to a user

    A user can of course have multiple roles. Typically they are chosen via checkboxes or a similar control and passed in as an array.

    For example:

    $rbacObj = new Rbac();
    $rbacObj->assignUserRole(1, [1, 2]);

    assignUserRole($userId, array $roleArray = [])

    The first parameter is the user id; the second is a one-dimensional array whose elements are role ids.

    5. Assign permissions to a role

    For example:

    $rbacObj = new Rbac();
    $rbacObj->assignRolePermission(1, [1, 2]);

    This assigns the permissions with ids 1 and 2 to the role with id 1.

    6. Delete a role

    Deleting a role must also delete the role-permission mapping data.

    $rbacObj = new Rbac();
    $rbacObj->delRole(1);

    The parameter is the role id.

    7. Move a role under another role

    As explained above, roles have parent-child relationships, so a role’s position can certainly be moved.

    $rbacObj = new Rbac();
    $rbacObj->moveRole(1,3);

    This example moves the role with id 1 under the role with id 3 as a child role.

    Documentation for the remaining edit/delete methods will be completed later; the functionality is there.

    Using the plugin – RBAC permission verification

    Fetch the permission list after login

    If you write your own permission checks, skip this step; if you want the rbac plugin to verify permissions, this step is required.

    After a successful login, do the following:

    $rbacObj = new Rbac();
    $rbacObj->cachePermission(1);

    This method queries all permissions of the user with id 1, indexes them by path, and stores them in the cache.

    Permission verification per request

    When every action needs a permission check, we usually define a method in a parent class to perform the verification, like this:

    $rbacObj = new Rbac();
    $rbacObj->can('/index/goods/list');

    This verifies whether the current user has permission to access /index/goods/list, returning true if so and false otherwise.

    The argument to can can be obtained via tp5’s request facilities.

    Visit original content creator repository
    https://github.com/gmars/tp5-rbac

  • jill.py

    JILL.py

    The enhanced Python fork of JILL — Julia Installer for Linux (and every other platform) — Light

    Features

    • download Julia releases from the nearest mirror server
    • support all platforms and architectures
    • manage multiple julia releases
    • easy-to-use CLI tool

    asciicast

    Install JILL

    If you are using jill for the first time, install it with pip: pip install jill --user -U. Use the same command to upgrade jill later.

    Python >= 3.8 is required. For base docker images, you also need to make sure gnupg is installed.

    Installing Julias

    When you type jill install, it does the following things:

    1. query the latest version
    2. download, verify, and install julia
    3. make symlinks, e.g., julia, julia-1, julia-1.6

    For common Julia users:

    • Get the latest stable release: jill install
    • Get the latest 1.y.z release: jill install 1
    • Get the latest 1.6.z release: jill install 1.6
    • Get the specific version: jill install 1.6.2, jill install 1.7.0-beta3
    • Get the latest release (including unstable ones): jill install --unstable

    Note that for Julia 1.10, you’ll have to install it with jill install '"1.10"' because of a python-fire limitation.

    For Julia developers and maintainers:

    • Get the nightly builds: jill install latest. This gives you julia-latest.
    • Checkout CI build artifacts of specific commit in the Julia Repository: jill install 1.8.0+cc4be25c (<major>.<minor>.<patch>+<build> with at least the first 7 characters of the hash). This gives you julia-dev.

    Some flags that can be useful:

    • No confirmation before installation: jill install --confirm
    • Download from Official source: jill install --upstream Official
    • Keep downloaded contents after installation: jill install --keep_downloads
    • Force a reinstallation: jill install --reinstall

    The symlinks

    To start Julia, you can use the predefined JILL symlinks such as julia. jill install uses the following rules to make sure that you’re always using the latest stable release.

    Stable releases:

    • julia points to the latest Julia release.
    • julia-1 points to the latest 1.y.z Julia release.
    • julia-1.6 points to the latest 1.6.z Julia release.

    For unstable releases such as 1.7.0-beta3, installing it via jill install 1.7 --unstable or jill install 1.7.0-beta3 will only give you julia-1.7; it won’t make symlinks for julia or julia-1.

    To dance on edge:

    • julia-latest points to the nightly build from jill install latest
    • julia-dev points to the julia CI build artifacts from, for example, jill install 1.8.0+cc4be25c.

    List symlinks and their target versions

    jill list [version] shows every symlink and its target Julia version.

    list

    Change symlink target

    On non-Windows systems, you are free to use the ln command to change the symlink targets. On Windows, an entry .cmd file is used instead, so you would need to copy files around. In the meantime, jill switch provides a simple and unified way to do this:

    • jill switch 1.6: point julia to the latest julia 1.6.z release.
    • jill switch <path/to/my/own/julia/executable>: point julia to a custom executable.
    • jill switch 1.6 --target julia-1: point julia-1 to the latest julia 1.6.z release.

    About downloading upstreams

    By default, JILL tries to be smart and downloads contents from the nearest upstream. You can get information on all upstreams via jill upstream. Here’s what I get on my laptop; I live in China, so the official upstreams aren’t so accessible for me 🙁

    upstream

    To temporarily disable this feature, you can use flag --upstream <server_name>. For instance, jill install --upstream Official will faithfully download from the official julialang s3 bucket.

    To permanently disable this feature, you can set environment variable JILL_UPSTREAM.

    Note that the flag has higher priority than the environment variable. For example, if JILL_UPSTREAM is set to the mirror server "TUNA", you can still download from the official source via jill install --upstream Official.

    About installation and symlink directories

    Here’s the default JILL installation and symlink directories:

    system installation directory symlink directory
    macOS /Applications ~/.local/bin
    Linux/FreeBSD ~/packages/julias ~/.local/bin
    Windows ~\AppData\Local\julias ~\AppData\Local\julias\bin

    For example, on Linux jill install 1.6.2 will place a julia folder in ~/packages/julias/julia-1.6 and create the symlinks julia/julia-1/julia-1.6 in ~/.local/bin.

    Particularly, if you’re using jill as root user, you will do a system-wide installation:

    • Installation directory will be /opt/julias for Linux/FreeBSD.
    • Symlink directory will be /usr/local/bin for Linux/FreeBSD/macOS.

    To change the default JILL installation and symlink directories, you can set environment variables JILL_INSTALL_DIR and JILL_SYMLINK_DIR.

    (Deprecated) jill install also provides the two flags --install_dir <dirpath> and --symlink_dir <dirpath>; they have higher priority than the environment variables JILL_INSTALL_DIR and JILL_SYMLINK_DIR.

    JILL environment variables

    jill is meant to be a convenient tool, and passing flags to it can get annoying. There are some predefined environment variables that you can use to set the default values:

    • JILL_UPSTREAM: specify a default downloading upstream (flag --upstream)
    • JILL_SYMLINK_DIR: override the default symlink directory (flag --symlink_dir)
    • JILL_INSTALL_DIR: override the default installation directory (flag --install_dir)

    The flag version has higher priority than the environment variable version.


    Advanced: Example with cron

    If you’re tired of seeing (xx days old master) in your nightly build version, then jill can make your nightly build always the latest version using cron:

    # /etc/cron.d/jill
    PATH=/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    
    # install a fresh nightly build every day
    0 0 * * * root jill install latest --confirm --upstream Official

    Advanced: Registering a new public releases upstream

    If it’s a public mirror and you want to share it worldwide with other users of JILL, you can add an entry to the public registry and make a PR; I will then tag a new release for it.

    Please check the sources.json format for more detailed information on the format.

    Advanced: Specifying custom (private) downloading upstream

    To add a new private upstream, create a file ~/.config/jill/sources.json (for Windows it is ~/AppData/Local/julias/sources.json) and add your own upstream configuration, just like the JILL sources.json does. Once this is done, JILL will recognize the new upstream entry.

    Please check the sources.json format for more detailed information on the format.

    Advanced: The Python API

    jill.py also provides a set of Python APIs:

    from jill.install import install_julia
    from jill.download import download_package
    
    # equivalent to `jill install --confirm`
    install_julia(confirm=True)
    # equivalent to `jill download`
    download_package()

    You can read its docstring (e.g., ?install_julia) for more information.
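
    As an illustrative sketch, pinning a minor version from Python might look like this; it assumes (without verifying here) that the keyword arguments mirror the CLI’s positional version argument and flags:

    from jill.install import install_julia

    # assumption: `version` mirrors the CLI's positional version argument
    # and `confirm` mirrors the --confirm flag
    install_julia(version="1.6", confirm=True)  # like `jill install 1.6 --confirm`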

    FAQs

    Why you should use JILL?

    Distro package managers (e.g., apt, pacman) are likely to provide a broken Julia with incorrect binary dependency versions (e.g., LLVM). Hence it’s recommended to download and extract the Julia binary provided on the Julia Downloads page. jill.py doesn’t do anything magical; it just makes this operation dead simple.

    Why I make the python fork of JILL?

    At first I found myself needing a simple tool to download and install Julia on my MacBook and on the servers in our lab, so I made my own shell scripts and wanted to share them with others. Then I found the jill.sh project; Abel knows a lot of shell, so I decided to contribute my macOS Julia installer to jill.sh.

    There are three main reasons for why I decided to start my Python fork:

    • I live in China. Downloading resources from GitHub and AWS S3 buckets is a painful experience. Thus I want to support downloading from mirror servers. Adding mirror server support to jill.sh is quite complicated and could easily become a maintenance nightmare.
    • I want to make a cross-platform installer that everyone can use, not just Linux/macOS users. Shell scripts don’t allow this as far as I can tell; Python does.
    • Most importantly, back when I started this project, I knew very little shell and nothing about C/C++/Rust/Go or whatever you think a good solution is. I happened to know a bit of Python.

    For some “obvious” reasons, Julia people don’t like Python, and I understand it. (I also don’t like Python after being an advanced Julia user for more than 3 years.) But to be honest, revisiting this project, I find that using Python was one of the best decisions made during the entire project. Here is the reason: however much you enjoy Julia (or C++, Rust), Python is one of the most successful programming languages for server-maintenance purposes. Users can easily find tons of “how-to” solutions about Python, and it’s easy to write, deploy, and ship Python code to the world via PyPI.

    And again, I live in China, so I want to rely on services that are easily accessible in China; PyPI is, while GitHub and AWS S3 buckets aren’t. A recent Julia installer project, juliaup, written in Rust, solves the Python dependency problem very well, but the trade-off is that juliaup needs its own distribution system (currently GitHub and an S3 bucket) to make sure it can be reliably downloaded to user machines. And for this it just won’t be as good as PyPI in the foreseeable future.

    Is it safe to use jill.py?

    Yes. jill.py uses GPG to check every tarball after downloading. Also, the *.dmg/*.pkg files for macOS and the .exe files for Windows are already signed.

    What’s the difference between jill.sh and jill.py

    jill.sh is a shell script that works quite well on Linux x86/x64 machines. jill.py is an enhanced Python package that focuses on Julia installation and version management, and brings a unified user experience to all platforms.

    Why julia fails to start

    The julia symlinks are stored in JILL’s predefined symlink directory, so you have to make sure this folder is in PATH. Search for “how to add a folder to PATH on xxx system” and you will find plenty of solutions.

    How do I use multiple patches releases (e.g., 1.6.1 and 1.6.2)

    Generally, you should not care about patch version differences, so jill.py makes it explicit that only one of 1.6.x can exist. If you insist on having multiple patch versions, you could use jill install --install_dir <some_other_folder> to install Julia into another folder, and then manually make a symlink back. As just said, in most cases common users should not care about this patch version difference and should just use the latest patch release.

    How to only download contents without installation?

    Use jill download [version] [--sys <system>] [--arch <arch>]. Check jill download --help for more details.

    Linux with musl libc

    For Julia (>= 1.5.0) on Linux with musl libc, you can just run jill install and it gives you the right Julia binary. To download the musl libc binary using jill download, you will need to pass the --sys musl flag.

    macOS with Apple silicon (M1)

    Yes, it’s supported. Because the macOS ARM version still has tier-3 support, jill.py will by default install the x86_64 version. If you want to use the ARM version, you can install it via jill install --preferred-arch arm64.

    CERTIFICATE_VERIFY_FAILED error

    If you’re confident, try jill install --bypass-ssl.

    Skip symbolic links generation

    If for some reason you prefer to download Julia without generating symbolic links, use jill install --skip-symlinks.

    Visit original content creator repository https://github.com/johnnychen94/jill.py
  • kWire

    kWire

    Extending Kotlin/Multiplatform with native programming capabilities.
    This library provides features including but not limited to:

    • Unmanaged memory API (Allocator, Memory and MemoryStack)
    • Foreign function API for calling native functions by address (FFI)
    • Shared library API for loading shared objects (SharedLibrary)
    • Native size types (NInt, NUInt and NFloat)
    • Native function types (CFn<F>)
    • Type safe pointers with constness (@Const and Ptr)
    • Calling convention modifiers for function pointers (@CDecl, @ThisCall, @StdCall and @FastCall)
    • Structure types (Struct)
    • Auto generated memory stack scopes (using MemoryStack)
    • Auto generated interop using @SharedImport, similar to DllImport in C#
    • Basic metaprogramming (@ValueType and typeOf<T>())
    • Function templates with monomorphization (@Template)
    • Memory access optimizations based on target platform
    • Function call optimizations based on target platform
    • Standalone ABI library for parsing and demangling kWire symbol data

    This library does not support JS/WASM targets, and there are no plans to support them in the future. If you know how to do it, feel free to contribute 🙂

    How it works

    Architecture Diagram

    ABI

    The ABI part of the library is shared between the runtime and compiler plugin.
    It implements shared type definitions and mechanisms for properly handling kWire symbols
    which are embedded in the module data of the module being compiled with the kWire compiler plugin.

    Runtime

    • On the JVM, the runtime implements/wraps around the Panama API available with Java 21+.
      This allows easy interaction with platform-specific JVM code and a lot of opportunity for optimizations which directly tie into the JIT compiler

    • On Android, the Panama API is not available out of the box.
      For this reason, kWire uses a port of Project Panama to Android to substitute the missing standard APIs

    • Special features like pinning on the JVM are implemented in the kWire Platform Binaries as handwritten JNI intrinsics, since Panama doesn’t offer any alternatives.

    • On native targets, kWire uses a custom implementation built in Kotlin/Native that uses libffi for dispatching calls at runtime in an efficient manner, giving performance comparable to built-in C-function calls in Kotlin/Native

    Compiler Plugin

    The compiler plugin is mainly responsible for lowering code.
    This means transforming some higher-level concepts and calls into their actual implementation, which is usually emitted directly in Kotlin (F)IR.

    This allows kWire to implement features otherwise not possible due to limitations of the Kotlin compiler.

    Gradle Plugin

    The Gradle plugin simply exists to inject the compiler plugin into the Kotlin compiler (daemon);
    however, it is planned to be extended with code-generation capabilities similar to kotlinx.cinterop.

    Credits & Licenses

    Project Name License Author
    kotlinx.coroutines Apache-2.0 JetBrains
    kotlinx.serialization Apache-2.0 JetBrains
    kotlinx.io Apache-2.0 JetBrains
    AutoService Apache-2.0 Google
    Stately Apache-2.0 Touchlab
    LWJGL 3 BSD-3-Clause Ioannis Tsakpinis
    OSHI MIT Daniel Widdis
    PanamaPort GPL2 Vladimir Kozelkov
    libffi MIT Anthony Green
    ANTLR Kotlin Apache-2.0, BSD-3-Clause Strumenta

    Special thanks to everyone involved in providing the libraries and tools
    this project so heavily relies on, and for pouring countless hours of their time into these projects.

    Visit original content creator repository https://github.com/karmakrafts/Multiplatform-dlfcn
  • knapsack-problem

    🎒 KnapsackProblem

    Knapsack problem solution project for PJC.

    📚 Description:

    The project solves the classic knapsack problem, in which every item has two parameters – a value and a weight – and the knapsack has a limited capacity.
    The task is to find the optimal selection, keeping the items with the maximum total value in the knapsack while respecting its capacity.

    📚 Implementation:

    This program solves the problem using dynamic programming.

    The solution covers only the 0-1 variant of the task, meaning each item can enter the selection at most once or not at all.
    The core of the solution is building a two-dimensional array (table) indexed by the number of items and the knapsack capacity; using loops, this lets us split the task into smaller subtasks and compute configurations for knapsacks of sizes 1 to N.
    Evaluating the items by their value, we determine which item is best for a given cell.
    Once the table is filled, our answer sits in the very last cell of the table.
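
    As an illustration of this table-filling idea (a sketch only – the project itself is implemented in C++):

    def knapsack(items, capacity):
        """0/1 knapsack; items is a list of (name, weight, value) tuples."""
        n = len(items)
        # dp[i][c] = best total value using the first i items with capacity c
        dp = [[0] * (capacity + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            _, weight, value = items[i - 1]
            for c in range(capacity + 1):
                dp[i][c] = dp[i - 1][c]  # skip item i
                if weight <= c:          # or take it, if it fits
                    dp[i][c] = max(dp[i][c], dp[i - 1][c - weight] + value)
        return dp[n][capacity]  # the answer sits in the very last cell

    # the items from the input-format example below:
    print(knapsack([("Necklace", 4, 4000), ("Ring", 1, 2500), ("Pendent", 3, 2000)], 4))  # 4500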

    📚 Features:

    • --help = prints all commands that can be used in the program.
    • --run = runs the program using a configuration in JSON format. You may use both the bundled configurations and your own, placed in the project folder beforehand.
    • --solve = runs the program with manually entered values: the number of items, their names, values, weights, and the knapsack capacity.
    • --tests = runs the prepared tests for various program flows.

    📚 Input format:

    For --solve (example):

    3 = number of items
    Necklace 4 4000 = 1st item (name, weight, value)
    Ring 1 2500 = 2nd item (name, weight, value)
    Pendent 3 2000 = 3rd item (name, weight, value)
    4 = knapsack size

    📚 Measurements:

    The program uses 1 thread.
    Unfortunately I did not manage to split the two-dimensional array across multiple threads, though with other kinds of implementation it is quite possible.
    Measurements were taken on: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz 2.59 GHz, 6 cores, 12 threads. CLion.

    items count knapsack size result(ms)
    10 10 0
    100 1000 3
    250 10000 9

    📚 Libraries:

    The project uses the following libraries (in the /lib folder):

    • jsoncpp = for reading the knapsack and item configuration in JSON format,
    • googletest = for the test methods in the /Tests folder – unit tests for various program flows.

    Visit original content creator repository
    https://github.com/noamorii/knapsack-problem