<![CDATA[William Sena]]>https://willsena.dev/https://willsena.dev/favicon.pngWilliam Senahttps://willsena.dev/Ghost 5.84Tue, 12 Nov 2024 21:25:18 GMT60<![CDATA[How Gitleaks can prevent secrets from leaking in your git repository]]>https://willsena.dev/como-o-gitleaks-pode-evitar-o-vazamento-de-segredos-em-seu-repositorio-git/6731209ffa8448000a3fd3b2Tue, 12 Nov 2024 13:18:13 GMT

What is it?

Some time ago, repositories stopped being mere code storage and took on the role of centralizing documentation, the test workflow, publishing to production, and other duties. With all of these duties also comes the responsibility of protecting important values in our repository, such as the database password, the authentication key, and even that staging-only key that may look harmless. 🐱 🦁

That is not a recommended practice, as pointed out in the 12 Factors, which emphasize that configuration must be kept in environment variables and that an application should be designed to run in any environment. However, we have to acknowledge that, without periodic checks, an unintentional leak can happen. We push that file with some production reference, relevant or not. Sometimes it even happens on purpose 🥺, because it is easier and faster to push that variable directly into a configuration file or Containerfile 🐳.

How can we prevent it?

This article is an initiative toward preventing secrets in repositories, structuring a project from the start to avoid basic security failures. Unfortunately, we live in a world where any flaw can result in damage to individuals or companies, providing easy profit for criminals 🥷🏻💸🤑💰.

Gitleaks is a viable, open-source solution ❤️ that detects leaks in our git repositories.

How does it work?

Gitleaks is a tool that, on each run, scans either a directory or the git history, checking whether there are leaks in the repository. Both modes matter, and we can use them as follows.

  • New changes can be analyzed at the repository level, where we can determine whether new code violated a rule or leaked data.
  • When scanning in a pipeline, it is advisable to analyze the repository and its history, since rules may be updated and a new leak may be detected, or, while rolling out Gitleaks, findings based on the history may surface.

Let's go ☕ !


.gitleaks.toml

First, create a file named .gitleaks.toml, where we will add the rules of our Gitleaks scan. False positives are common, so the tool offers ways to ignore files or matches that could cause them.

Here is an example:

[extend]
useDefault = true

[allowlist]
description = "global allow list"
paths = [
  '''gitleaks\.toml''',
  '''gitleaks-report\.json''',
  '''\.env$''',
  '''(.*?)(jpg|gif|doc)''',
]

[[rules]]
id="aws-access-key"
description = "AWS Access Key"
regex = '''AKIA[0-9A-Z]{16}'''
tags = ["key", "AWS"]

[[rules]]
id="aws-access-secret"
description = "AWS Secret Key"
regex = '''(?i)aws_secret_access_key\s*=\s*[A-Za-z0-9/+=]{40}'''
tags = ["key", "AWS"]
  • extend indicates that this is an extension of the default configuration.
  • allowlist establishes permissive rules, for example the files that should not be taken into account during the scan.
  • rules lets you define multiple rules for detecting leaks of sensitive data.

That's it; this was a brief look at the configuration. You can find more detailed examples directly in the Gitleaks documentation, in the "Configuration" section.

A file with sensitive data

.env.sample

MY_WEAK_PASSWORD=X
MY_STRONG_PASSWORD=QJJ0S81ogYX5iJebUM4LN1FOFFuQKo0B
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

Running against a directory

With the configuration done and a sensitive file at hand, we can start the scan.

docker run --rm -v $(pwd):/repo \
    zricethezav/gitleaks:latest \
    dir /repo \
    --gitleaks-ignore-path .gitleaksignore \
    --config /repo/.gitleaks.toml \
    --report-format json \
    --report-path /repo/gitleaks-report.json \
    -v

The expected result should identify the leaks.

Finding:     MY_STRONG_PASSWORD=QJJ0S81ogYX5iJebUM4LN1FOFFuQKo0B
Secret:      QJJ0S81ogYX5iJebUM4LN1FOFFuQKo0B
RuleID:      generic-api-key
Entropy:     4.452819
File:        /repo/.env.sample
Line:        2
Fingerprint: /repo/.env.sample:generic-api-key:2

12:03PM INF scan completed in 2.19ms
12:03PM WRN leaks found: 3

Running against a git repository

Previously we used directory mode, which does not take the git history into account. The following command scans the repository's entire history. I do not recommend running it in a pre-commit hook, both for performance reasons and because analyzing the whole history on every change is unnecessary. This procedure is appropriate for a pipeline run.

docker run --rm -v $(pwd):/repo \
    zricethezav/gitleaks:latest \
    detect --source /repo \
    --config /repo/.gitleaks.toml \
    --report-format json \
    --report-path /repo/gitleaks-report.json \
    -v

How to fix it?

You need to determine whether the leak is significant. If it is, it must be removed from the repository, and it is advisable to rotate the password or key. We can also rewrite the repository history (reflog, rebase) to eliminate the leak completely. Alternatively, we can use .gitleaksignore to ignore commits, or even to ignore non-sensitive data flagged by the scan.
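If you decide to actually purge a file from history, here is a sketch of that workflow (assuming the leaked file is .env.sample and that git filter-repo is installed; this rewrites every commit hash, so coordinate with your team before force-pushing):

# remove the file from every commit in the history (destructive: all hashes change)
git filter-repo --invert-paths --path .env.sample

# push the rewritten history
git push --force origin main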

In this scenario, we assume this finding was already pushed to the repository earlier and we want to ignore it. To do so, create a file named .gitleaksignore.

SEU_COMMIT_HASH_AQUI:.env.sample:generic-api-key:2

Extras

Make

I adopted this practice in my projects some time ago, an inheritance from what I learned with Golang 🦫. A Makefile is a file used by a utility called make, a build automation and dependency management tool widely used in software projects, particularly in languages such as C and C++. It can also be applied to other automation tasks. When we need to run shell commands, a Makefile helps us organize and centralize the execution of our scripts.

# Define the shell for the make process
SHELL := /bin/bash

REPO_PATH := $(PWD)
GITLEAKS_IMAGE := zricethezav/gitleaks:latest
GITLEAKS_CONFIG := $(REPO_PATH)/.gitleaks.toml
GITLEAKS_REPORT := $(REPO_PATH)/gitleaks-report.json

pre-commit: leaks

leaks-history:
	docker run --rm \
    -v $(REPO_PATH):/repo \
    $(GITLEAKS_IMAGE) \
    detect --source /repo \
    --config /repo/.gitleaks.toml \
    --report-format json \
    --report-path /repo/gitleaks-report.json \
    -v

leaks:
	docker run --rm \
    -v $(REPO_PATH):/repo \
    $(GITLEAKS_IMAGE) \
    dir /repo \
    --config /repo/.gitleaks.toml \
    -v

leaks-report:
	docker run --rm \
    -v $(REPO_PATH):/repo \
    $(GITLEAKS_IMAGE) \
    dir /repo \
    --config /repo/.gitleaks.toml \
    --report-format json \
    --report-path /repo/gitleaks-report.json \
    -v

help:
	@echo "Makefile Commands:"
	@echo "  pre-commit        Run gitleaks-history check before commit"
	@echo "  leaks-history    Run gitleaks history detection on the repository"
	@echo "  leaks            Run gitleaks detection on the repository directory"
	@echo "  leaks-report     Run gitleaks with a report on the repository directory"
	@echo "  help             Show this help message"

%:
	@echo "Unknown target '$@'. Use 'make help' to see available commands."
	@$(MAKE) help


Github Actions

Gitleaks has an official action that is free for projects. Open-source projects do not need to register to use the pipeline, while companies do. Only the optimized pipeline requires registration; the project itself is open source, and you can run it through Docker directly in an Action, or locally if needed.

Here is a GitHub Actions example using the default setup provided by Gitleaks:

name: Gitleaks Scan
run-name: Gitleaks Scan in [${{ github.ref_name }}] @${{ github.actor }}

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  gitleaks:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Run Gitleaks
        uses: gitleaks/gitleaks-action@v2
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }} 
          args: detect --report-format json --report-path gitleaks-report.json

      - name: Upload Gitleaks report
        uses: actions/upload-artifact@v3
        with:
          name: gitleaks-report
          path: gitleaks-report.json
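If you prefer not to depend on the official action, a roughly equivalent step (just a sketch, reusing the same container image as the Makefile targets above) can run Gitleaks directly through Docker:

      - name: Run Gitleaks via Docker
        run: |
          docker run --rm -v "$PWD:/repo" zricethezav/gitleaks:latest \
            detect --source /repo --config /repo/.gitleaks.toml -v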

Pre-commit

Preventing leaks is crucial, so a pre-commit hook helps identify leaks before the change reaches the repository, on every commit.

  • pre-commit
#!/bin/sh

make pre-commit

RESULT=$?

if [ $RESULT -ne 0 ]; then
  echo "Pre-commit checks failed. Commit aborted."
  exit 1
fi

exit 0

The following command sets up a trigger that runs a scan on every code update, that is, on every commit.

ln -sf $(pwd)/pre-commit $(pwd)/.git/hooks/pre-commit
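If you already use the pre-commit framework instead of a hand-written hook, Gitleaks ships a hook definition; a minimal .pre-commit-config.yaml would look roughly like this (the rev below is only an example, pin whichever release you actually use):

repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.4
    hooks:
      - id: gitleaks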

Repository

As always, the implementation is in my GitHub repository, which you can clone and test with Gitleaks. For more information, see the project's README.

GitHub - williampsena/gitleaks-recipes: This repository provides examples of using Gitleaks.
This repository provides examples of using Gitleaks. - williampsena/gitleaks-recipes

And that's all, folks!

In the repository you can check the GitHub Actions runs and get a better practical sense of how to make your code secure. I hope I have delivered my treasure, and that this help has been useful and edifying, keeping your kernel 🧠 updated!

Matthew 6:21-23 For where your treasure is, there your heart will be also. ― The eye is the lamp of the body. If your eyes are healthy, your whole body will be full of light. But if your eyes are unhealthy, your whole body will be full of darkness. If then the light within you is darkness, how great is that darkness!

References

]]>
<![CDATA[Golang: How to Test Code That Exits or Crashes?]]>https://willsena.dev/golang-how-to-test-code-that-exits-or-crashes/6681e4b46b6f12000a458de2Wed, 03 Jul 2024 23:57:50 GMT

Today I'll go over how to write "exitable/crasheable" tests in the Go language. This question emerged when I attempted to write my tests in the same way that I did in other languages. Before I show you how I develop some tests, I'd want to share some useful code design tips.

Go error handling ❌

My first language experience was procedural, and then I moved on to object-oriented programming. On the Go side, we have conventions for handling errors so that problems do not blow up during pipeline execution and testing. With my limited Go skills, I would argue that you should not use panic at all, nor log.Fatal, which results in an os.Exit: you are interrupting the execution of your program, and you may not want that behavior shared across all of your packages.

Assume you're consuming a package that could crash and your system isn't expecting it: every crash means a worker, server, or pod goes offline. If we are working with an HTTP server, it would most likely just return an internal server error, because this piece of code is isolated. Of course, you can handle issues at the error entry point, but I believe this is not the best solution because it adds 🦨 smell to your code. It is equivalent to failing to separate your ♻️ recyclable garbage.

I understand that returning an error from every function can be harder to handle; in Elixir we have pattern matching to deal with error verbosity, but again, returning an error is much better than panicking or interrupting execution: explicit is better than implicit. Today I prefer less magic than in the past; I used to love the way Rails, Django, ASP.NET, and Spring do things, but after lost nights 💤 and much coffee ☕, I now choose simplicity in some places of code design.
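As a minimal sketch of that preference (the function below is mine, not part of the calculator that follows), parse failures become values the caller can inspect instead of a panic that unwinds the whole program:

package main

import (
	"fmt"
	"strconv"
)

// parseOperand returns an error instead of panicking on bad input.
func parseOperand(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("invalid operand %q: %w", s, err)
	}
	return n, nil
}

func main() {
	if _, err := parseOperand("abc"); err != nil {
		fmt.Println("handled gracefully:", err)
	}
}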

Writing a calculator

Let's develop a calculator 🧮 using the panic and exit functions to show how program interruption can be troubling during testing.

  • main.go
package main

import (
	"log"
	"os"
	"strconv"
)

// This function is a minimal implementation of the calculator
func calculate(args []string) float64 {
	if len(args) < 3 {
		panic("invalid arguments")
	}

	x, err := strconv.Atoi(args[0])

	if err != nil {
		panic(err)
	}

	y, err := strconv.Atoi(args[2])

	if err != nil {
		panic(err)
	}

	var r float64

	switch args[1] {
	case "+":
		r = float64(x + y)
	case "-":
		r = float64(x - y)
	case "x":
		r = float64(x * y)
	case "/":
		r = float64(x / y)
	default:
		log.Fatal("invalid operation")
	}

	return r
}

func main() {
	args := os.Args[1:]

	r := calculate(args)

	log.Printf("🟰  %.2f\n", r)
}
  • Panic functions are used when numbers cannot be parsed or when fewer than three arguments are provided.
  • When an operation does not exist, the program calls log.Fatal, which in turn calls os.Exit.

Then let's run the application.

# installing deps
go mod download

go run main.go 2 - 9
# 🟰 -7

go run main.go 5 + 2
# 🟰 7

go run main.go 7 x 7
# 🟰 49

go run main.go 49 / 7
# 🟰  7.00

Before writing tests, we need some functions to support them.

  • test.go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
	"testing"
)

// Run a fork test that may crash using os.exit.
func RunForkTest(t *testing.T, testName string) (string, string, error) {
	cmd := exec.Command(os.Args[0], fmt.Sprintf("-test.run=%v", testName))
	cmd.Env = append(os.Environ(), "FORK=1")

	var stdoutB, stderrB bytes.Buffer
	cmd.Stdout = &stdoutB
	cmd.Stderr = &stderrB

	err := cmd.Run()

	return stdoutB.String(), stderrB.String(), err
}
  • The RunForkTest function runs a specified test in a forked process and allows you to assert on its stdout and stderr.

Now it is time to write the tests.



💡 Okay, we've reached the main goal: writing crashable tests. The conventional process exit status is 0 for success and non-zero for failure, and this status indicates whether the forked test passed or failed.

To avoid the test run itself crashing because of panic or os.Exit, these tests run in a forked process. When a crash happens, the forked process terminates, and the main process asserts on its exit status and on the captured stdout and stderr.

  • main_test.go
package main

import (
	"bytes"
	"log"
	"os"
	"testing"

	"github.com/stretchr/testify/assert"
)

func TestCalculateSum(t *testing.T) {
	r := calculate([]string{"5", "+", "5"})

	assert.Equal(t, float64(10), r)
}

func TestCalculateSub(t *testing.T) {
	r := calculate([]string{"5", "-", "15"})

	assert.Equal(t, float64(-10), r)
}

func TestCalculateMult(t *testing.T) {
	r := calculate([]string{"10", "x", "10"})

	assert.Equal(t, float64(100), r)
}

func TestCalculateDiv(t *testing.T) {
	r := calculate([]string{"100", "/", "10"})

	assert.Equal(t, float64(10), r)
}

func TestCalculateWithPanic(t *testing.T) {
	defer func() {
		// recover returns the panic value; ok is false when no panic happened
		if err, ok := recover().(error); ok {
			assert.Contains(t, err.Error(), "parsing \"B\": invalid syntax")
		}
	}()

	calculate([]string{"10", "/", "B"})

	t.Errorf("😳 The panic function is not called.")
}

func TestMainWithPanicWithFork(t *testing.T) {
	if os.Getenv("FORK") == "1" {
		calculate([]string{"A", "/", "10"})
	}

	stdout, stderr, err := RunForkTest(t, "TestMainWithPanicWithFork")

	assert.Equal(t, err.Error(), "exit status 2")
	assert.Contains(t, stderr, "parsing \"A\": invalid syntax")
	assert.Contains(t, stdout, "FAIL")
}

func TestMainWithExit(t *testing.T) {
	oldStdout := os.Stdout
	oldArgs := os.Args

	var buf bytes.Buffer
	log.SetOutput(&buf)

	defer func() {
		os.Args = oldArgs
		os.Stdout = oldStdout
	}()

	os.Args = []string{"", "10", "x", "10"}
	main()

	log.SetOutput(os.Stderr)

	assert.Contains(t, buf.String(), "🟰  100.00")
}

This is the test execution 🔥.

  • I employ the package gotestfmt to produce test experiences similar to those we have in other languages. Remember, we develop code for humans, therefore colors and emoji are extremely important 🙄.

Code

💡 Feel free to clone this repository, which contains related files:

go-recipes/crashable-tests at main · williampsena/go-recipes
Contribute to williampsena/go-recipes development by creating an account on GitHub.

That's all folks

In this article, I described how to handle tests for code that uses panic or os.Exit; however, I recommend avoiding this behavior throughout your code and preferring to return an error instead:

package main

func calculate(args []string) (float64, error)

Let main be the one responsible for panicking or exiting the application; as we saw, writing tests for those behaviors can be difficult. A rough sketch of that split follows.
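This is my variation, not the repository's code; calculate is only a stub standing in for the error-returning version suggested above:

package main

import (
	"log"
	"os"
)

// calculate stands in for the error-returning version suggested above.
func calculate(args []string) (float64, error) { return 0, nil }

func main() {
	r, err := calculate(os.Args[1:])
	if err != nil {
		log.Fatal(err) // only the entrypoint decides to exit the process
	}

	log.Printf("🟰  %.2f\n", r)
}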

Please provide feedback so that we can keep our 🧠 kernel up to date.

I hope I can assist you with writing tests or resolving your worries about structuring your code for error handling, as well as remind you to stay focused on your goal:

🕊️ "Many are the plans in the mind of a man, but it is the purpose of the Lord that will stand" . Proverbs 19:21.
]]>
<![CDATA[How to generate documentation from your Go code?]]>https://willsena.dev/como-gerar-documentos-do-seu-codigo-em-go/6666180fb61d0f0009a09b01Sun, 09 Jun 2024 21:45:26 GMT

Some time ago, aiming to keep my generalist style, I dove into studying Go. I was studying, but I never had the opportunity to try a production project to adjust the training to the actual game ⚽.

During that journey I had the pleasure, and the impact, of getting to know different techniques for solving a problem. Without a doubt I got attached to the language's concepts and decided to port a personal application I had written in Elixir to Go. The goal of this article is not to compare 🫡; that comment is only to show how much I reached the productivity and maturity I was hoping for.

Godoc

Comparisons aside! Just as Python has pydoc and Node.js has ESDoc, Go also provides the godoc package for extracting documentation, converting all structured comments into an HTML version.

I have always liked this approach. Personally, I see no conflict between code and documentation. We should remember that, much like with ChatGPT, we produce code for other human beings. Finally, it is important to keep the documentation up to date.

The documentation style is simple and without many rules, letting you define your own way of documenting a function's arguments and its return value.
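By convention, a doc comment sits right above the declaration and starts with the name of the thing it documents; a tiny illustration (not part of this article's project):

// Package greeting exposes a single helper used in this example.
package greeting

import "fmt"

// Hello returns a greeting for the given name.
// godoc and pkgsite render this comment as the function's documentation.
func Hello(name string) string {
	return fmt.Sprintf("Hello, %s!", name)
}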

Let's get practical!

We will create a simple, documented HTTP application that returns random card data, so that the super hacker of this generation, who uses the word "hack" for everything, can attempt transactions and get a declined purchase in their face.

The super hacker

Dependencies

We will install two official libraries, godoc and pkgsite, which convert the comments into HTML.

go install -v golang.org/x/tools/cmd/godoc@latest
go install golang.org/x/pkgsite/cmd/pkgsite@latest

Code

  • go.mod
💬 The gofakeit package offers several implementations for generating random data, which makes preparing test environments and fixtures easier.
module github.com/williampsena/go-recipes/doc-app-example

go 1.22.4

require github.com/brianvoe/gofakeit/v7 v7.0.3 // indirect

  • main.go
// This package represents the application command for starting a web server.
package main

import (
	"github.com/brianvoe/gofakeit/v7"
	"github.com/williampsena/go-recipes/doc-app-example/web"
)

// This function is responsible for setting up the program before it runs
func init() {
	gofakeit.Seed(0)
}

// Application entrypoint
func main() {
	svr := web.BuildServer()
	web.ListenAndServe(svr)
}
  • Makefile
SHELL=bash

dev:
	go run main.go

docs-godoc:
	godoc -http=:4444

docs-pkgsite:
	pkgsite -http=:4444
  • web/server.go
// This package contains web server structures and functions responsible for handling HTTP application.
package web

import (
	"fmt"
	"net/http"
)

// Create an application web server mux with routes established
func BuildServer() *http.ServeMux {
	mux := http.NewServeMux()

	mux.HandleFunc("GET /health", HealthCheckEndpoint)
	mux.HandleFunc("GET /cards", CardGeneratorEndpoint)

	return mux
}

// Listening web server on port 4000
func ListenAndServe(mux *http.ServeMux) {
	fmt.Println("✅ The sever is listening at port 4000")
	http.ListenAndServe("localhost:4000", mux)
}
  • web/cards.go
package web

import (
	"encoding/json"
	"fmt"
	"net/http"

	"github.com/brianvoe/gofakeit/v7"
)

// Struct responsible for holding all card data, such as the holder's name and card number
type Card struct {
	HolderName string `json:"holder_name"` // card holder name
	Type       string `json:"type"`        // card type (master, visa, amex)
	Number     string `json:"number"`      // card number
	Cvv        string `json:"cvv"`         // card verification code
	Expiration string `json:"exp"`         // the expiration year + month
}

// Create a Fake Card Struct
func BuildCard() (*Card, error) {
	creditCard := gofakeit.CreditCard()

	card := Card{
		HolderName: gofakeit.Name(),
		Type:       creditCard.Type,
		Number:     creditCard.Number,
		Cvv:        creditCard.Cvv,
		Expiration: creditCard.Exp,
	}

	return &card, nil
}

// Endpoint is responsible for responding to a false card generation
func CardGeneratorEndpoint(w http.ResponseWriter, r *http.Request) {
	card, err := BuildCard()

	if err != nil {
		w.WriteHeader(http.StatusInternalServerError)
		fmt.Fprint(w, "Sorry, something wrong happened!")
		return
	}

	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(card)
}

  • web/health.go
package web

import (
	"fmt"
	"net/http"
)

// Endpoint is responsible for responding to application health
func HealthCheckEndpoint(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, "I'm so healthy 😌!")
}
The purpose of this article is not to detail every aspect of the implementation, but rather to look at the comments placed in the code, which will be translated into the HTML documentation.

Let's run the super hacker 💀 application that generates card 💳 data.

make dev

# or

go run main.go

The application will be reachable on port 4000; you can run the following commands to check its routes:

curl http://localhost:4000/health 

# I'm so healthy 😌!

curl http://localhost:4000/cards 

# {"holder_name":"Ephraim Hand","type":"American Express" "number":"6062825121549507","cvv":"857","exp":"08/25"}

Generating the documentation

With the project structure ready, we will run godoc to check the generated documentation.

make docs-godoc

# or

godoc -http=:4444

The documentation is available on port 4444.


Well, as you can see in the video above, the navigation is not as straightforward as we would like, right?

So, if a package does not please popular taste, this new generation creates a new one, right (see NPM packages)? pkgsite has more structured documentation, with a better browsing experience than godoc, and it is used as the frontend for Go packages. I found references indicating a move to deprecate godoc in favor of pkgsite.

Without further ado! Let's move on to pkgsite; nothing needs to be adjusted, just install and run the package.

It is time to view the documentation with a better browsing experience.

make docs-pkgsite

# or

pkgsite -http=:4444

The documentation is available on port 4444.


As we can see, the difference starts with the theme colors. Many people love dark mode; I confess I like some applications in light mode, don't judge me. Reading the code in raw mode makes our copy/paste easier 😜. The fact is that pkgsite offers a superior browsing experience.

Repository

The implementation is available in my GitHub repository:

go-recipes/doc-app-example at main · williampsena/go-recipes
Contribute to williampsena/go-recipes development by creating an account on GitHub.

The end!

That's it for today. I hope this content improves your experience with Go. Keep your 🧠 kernel updated with whatever is honorable, just, pure, and of good report; if there is any virtue or any praise, may it dwell in our thoughts! 🕊️ Philippians 4:8

References

]]>
<![CDATA[Running the Traefik, my favorite Edge Router with Podman]]>https://willsena.dev/running-the-traefik-my-favorite-cloud-edge-router-with-podman/6612e82c4849f2000abf44edSun, 07 Apr 2024 21:30:00 GMT

Today I'm going to show you how to use Traefik locally with Podman, my favorite Edge Router, to publish services with route matching, authentication, and other middlewares in an outstanding way.

Requirements

We require Podman to run containers locally; if you want an introduction, I wrote the following articles:

How to Run Secure Pods with Podman
Podman is a Red Hat container engine that allows users to manage containerized applications and their resources. Operates without a daemon.
Building Kubernetes-style pods with Podman
In a recent piece, I discussed Podman, a wonderful Red Hat-powered project that provides a container alternative supported by Kubernetes and a replacement for Docker, read more at the following link. How to Run Secure Pods with PodmanPodman is a Red Hat container engine that allows users to manage…

Podman SystemD Socket

Start the Podman systemd socket, as Traefik requires it to handle containers:

systemctl --user start podman.socket

If you prefer, you can set it to start immediately on boot:

systemctl --user enable podman.socket
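To confirm the socket is actually available before continuing, a quick check (the path below assumes the default rootless setup):

systemctl --user status podman.socket

ls $XDG_RUNTIME_DIR/podman/podman.sock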

What is Traefik?

Traefik is a modern HTTP reverse proxy and load balancer developed in Go that is suited for microservice architecture. It is commonly used in containerized environments, such as Docker and Kubernetes.

There are no official docs on how to make Traefik work with Podman, although a few months ago I saw some examples using the Podman socket, similar to how Traefik works with Docker.

Traefik dynamically detects services as they are introduced to the infrastructure and routes traffic to them, making applications easier to manage and grow.

Major features:

  • Automatic Service Discovery: Traefik can detect new services as they are introduced to your infrastructure, removing the need for human configuration.
  • Dynamic Configuration: It can reorganize itself as services scale up or down, making it ideal for dynamic contexts like as container orchestration platforms.
  • Load Balancing: Traefik includes built-in load balancing capabilities for distributing incoming traffic over many instances of a service.
  • Automatic TLS: It may supply TLS certificates from Let's Encrypt, enabling HTTPS by default without requiring manual configuration.
  • Dashboard: Traefik includes a web dashboard and a RESTful API, which enable operators to monitor and manage traffic routing and configuration.
  • Middleware Support: It supports a number of middleware plugins for features like authentication, rate limiting, and request rewriting.
  • Multiple Backends: Traefik can route traffic to multiple backend services based on various criteria like path, headers, or domain names.

Goals

The goal is to create an example using Podman Kube, a Kubernetes-Deployment-style way of running pods. Traefik has a defined deployment schema; this article introduces a way of annotating containers with labels.

Traefik communicates directly with the Docker or Podman socket to listen for container creation and to define routes and middlewares for them.

Please show the code that is working!


Deployments

  • traefik.yaml

This file demonstrates a Traefik pod deployment that listens on ports 8000 and 8001.

apiVersion: v1
kind: Pod
metadata:
  name: traefik
  labels:
    app: traefik
spec:
  containers:
  - name: traefik
    image: docker.io/library/traefik:v3.0
    args:
    - '--accesslog=true'
    - '--api.dashboard=true'
    - '--api.insecure=true'
    - '--entrypoints.http.address=:8000'
    - '--log.level=info'
    - '--providers.docker=true'
    volumeMounts:
    - mountPath: /var/run/docker.sock:z
      name: docker_sock
    ports:
    - containerPort: 8000
      hostPort: 8000
      protocol: TCP
    - containerPort: 8080
      hostPort: 8001
      protocol: TCP
  restartPolicy: Never
  dnsPolicy: Default
  volumes:
  - name: docker_sock
    hostPath:
      path: "/run/user/1000/podman/podman.sock"
      type: File
Please check the location of your podman.sock; for the default user (UID 1000), the socket is typically found at /run/user/1000/podman/podman.sock.
  • whoami.yaml

This file shows a replica of a simple HTTP container that returns container-specific information such as IP and host name for debugging.

Traefik uses container labels or annotations to define rules.

  • traefik.http.routers.whoami.rule: specifies match rules for reaching the container, which can be host, header, path, or a combination of these.
  • traefik.http.services.whoami.loadbalancer.server.port: specifies the port on which the container is listening.
apiVersion: v1
kind: Pod
metadata:
  name: whoami
  labels:
    traefik.http.routers.whoami.rule: Host(`whoami.localhost`)
    traefik.http.services.whoami.loadbalancer.server.port: 3000
spec:
  containers:
  - name: whoami
    image: docker.io/traefik/whoami:latest
    ports:
    - containerPort: 3000
      protocol: TCP
    env:
    - name: WHOAMI_PORT_NUMBER
      value: 3000
  restartPolicy: Never
  dnsPolicy: Default
🫠Unfortunately, replicas are not supported. If we had replicas, Traefik would handle them using round-robin to reach each container, as Traefik works with Docker Swarm and Kubernetes.
  • whoami-secure.yaml

This file describes the same service but includes the Basic Auth middleware to demonstrate how to utilize middlewares.

  • traefik.http.routers.{route-name}.middlewares: specifies the middlewares utilized in the current container.
  • traefik.http.middlewares.{middleware-name}.basicauth.users: specifies the user and passwords.

You can generate htpassword with the following command:

docker run --rm -ti xmartlabs/htpasswd <username> <password> > htpasswd
apiVersion: v1
kind: Pod
metadata:
  name: whoami-secure
  labels:
    traefik.http.routers.whoami-secure.rule: Host(`whoami-secure.localhost`)
    traefik.http.services.whoami-secure.loadbalancer.server.port: 3000
    traefik.http.routers.whoami-secure.middlewares: auth
    traefik.http.middlewares.auth.basicauth.users: foo:$2y$05$.y24r9IFaJiODuv41ool7uLyYdc4H4pDZ5dSKkL.Z/tUg3K3NancS
spec:
  containers:
  - name: whoami-secure
    image: docker.io/traefik/whoami:latest
    ports:
    - containerPort: 3000
      protocol: TCP
    env:
    - name: WHOAMI_PORT_NUMBER
      value: 3000
  restartPolicy: Never
  dnsPolicy: Default
It is important to note that only Traefik exposes a port to the host; Traefik centralizes all traffic, proxying each request to the IP and listening port of each container.

Running

podman play kube pods/traefik/traefik.yaml
podman play kube pods/traefik/whoami.yaml      
podman play kube pods/traefik/whoami-secure.yaml

Testing

You can view the Traefik Dashboard at port 8001, which displays important information about routes and containers.


Let's test the Whoami route at the endpoint http://whoami.localhost:8000/:


We can now check the Whoami route using basic authentication with the username "foo" and password "bar" at http://whoami-secure.localhost:8000/


Troubleshooting

If the hosts are not resolving, you may need to add them to /etc/hosts.

127.0.0.1  localhost whoami.localhost whoami-secure.localhost

Code

💡 Feel free to clone this repository, which contains related files:

GitHub - williampsena/podman-recipes: This repository contains Podman examples such as network, volumes, environment variables, and other features.
This repository contains Podman examples such as network, volumes, environment variables, and other features. - williampsena/podman-recipes

Tearing down

podman play kube --down pods/traefik/traefik.yaml
podman play kube --down pods/traefik/whoami.yaml      
podman play kube --down pods/traefik/whoami-secure.yaml

That's it

In this post, we demonstrated how Traefik works, how to build settings to reach containers, and how to use middlewares to exploit the full capability of container orchestration. I recommend that you look into Traefik middlewares; they can sometimes be more useful than an API gateway.

Please keep your kernel 🧠 updated God bless 🕊️ you. I'll share a quote:

Whatever you do, work at it with all your heart, as working for the Lord, not for human masters. Colossians 3:23

References

]]>
<![CDATA[Building and deploying AWS Lambda with Serverless framework in just a few of minutes - Part 2]]>https://willsena.dev/building-and-deploying-aws-lambda-with-serverless-framework-in-just-a-few-of-minutes-part-2/65dbc827154f19000a3fa840Mon, 26 Feb 2024 01:15:42 GMT

In this article, I'll show you how to deploy the lambda that we constructed in the previous section. This time, we need to set up AWS in preparation for deployment.

If you missed Part 1, please read it first.
Building and deploying AWS Lambda with Serverless framework in just a few of minutes
How to create an AWS Lambda using a Serverless framework, as well as how to structure and manage your functions as projects in your repository.

No costs!

To test deployments, there is no cost; Amazon has a free tier for lambdas, so you can deploy as many lambdas as you want; you will pay once you exceed the following limits:

  • 1 million free requests per month
  • 3.2 million seconds of compute time per month
⚡ So be careful to write any lambdas that involve image or video processing, or that run for a long duration of time, because you'll most likely pay for it, and keep in mind that there is a 900-second execution limit (15 minutes).

Requirements

AWS Access Key

After you've created your account, you'll need to create a user and set your AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION environment variables.

You can set these values in your profile (.bashrc, .zshrc, .profile, or bash_profile).

export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY_ID"
export AWS_SECRET_ACCESS_KEY="YOUR_ACCESS_KEY"
export AWS_DEFAULT_REGION="YOUR DEFAULT REGION or us-east-1"

Group and privileges

Now we need to assign some privileges to the user that will be in charge of deploying AWS Lambda through Serverless. Create a group, attach it to your user, and grant the following permissions:

  • AmazonAPIGatewayAdministrator
  • AWSCloudFormationFullAccess
  • AWSCodeDeployFullAccess
  • AWSCodeDeployRoleForLambda
  • AWSLambdaBasicExecutionRole
  • AWSLambdaFullAccess

Role for AWS Lambda execution

If you do not specify an IAM role before deployment, Serverless will create and manage one for you, provided your user has permission to create roles. In this example, I tried not to use this magic; in my opinion, letting a tool set your lambda permissions is not a good idea...


So let's create a role; we can let AWS help us create a Lambda-specific role, as I did below:


After we create this role, you must copy the ARN and specify it at deployment.


This is convenient; however, we end up with a Lambda role that has access to many AWS resources, which is a real security concern for production scenarios. I recommend creating something scoped specifically to your lambda. It's harder, but safer.


service: service-currencies
frameworkVersion: "3"

provider:
  name: aws
  runtime: nodejs18.x
  iam:
    role: arn:aws:iam::12345678:role/AWSLambda

functions:
  api:
    handler: handler.listCurrencies
    events:
      - httpApi:
          path: /
          method: get
plugins:
  - serverless-plugin-typescript
  - serverless-offline

package:
  patterns:
    - '!node_modules/**'
    - 'node_modules/node-fetch/**'

The last lines of the file exclude node_modules from the deployment package, keeping only node-fetch.

Deploy

If everything is set up correctly, the deployment will be successful. As an ordinary Friday deployment. 😜

SLS_DEBUG=* sls deploy --verbose

That's it; your public service has been deployed, and you may test the ApiUrl exposed after deployment.

Issues

  • Perhaps you are receiving a privilege-related problem; check to see if you have missed any policies for your user.
  • Remember to configure your AWS keys in your profile or via a shell session if you want.

Removing

To avoid spending money 💸 on a public test route, use the following command to remove your lambda function.

SLS_DEBUG=* sls remove

That's all

Thank you for your attention, I hope that my piece has helped you understand something about Lambda and has encouraged you to learn more about not only AWS but also Cloud.

Please keep your kernel 🧠 updated. God gives us blessings 🕊️.

References

]]>
<![CDATA[Building and deploying AWS Lambda with Serverless framework in just a few of minutes]]>https://willsena.dev/building-and-deploying-aws-lambda-with-serverless-framework-in-just-a-few-of-minutes/63d6ba543711560001c9735eMon, 19 Feb 2024 00:00:00 GMT

Today I'll teach you how to create an AWS Lambda using a Serverless framework, as well as how to structure and manage your functions as projects in your repository. Serverless provides an interface for AWS settings that you may configure in your deployment configurations and function permissions for any service, including S3, SNS, SQS, Kinesis, DynamoDB, Secret Manager, and others.

AWS Lambda

Is a serverless computing solution offered by Amazon Web Services. It lets you run code without having to provision or manage servers. With Lambda, you can upload your code as functions, and AWS will install, scale, and manage the infrastructure required to perform those functions.

AWS Lambda supports a variety of programming languages, including:

  • Node.js
  • Python
  • Java
  • Go
  • Ruby
  • Rust
  • .NET
  • PowerShell
  • Custom Runtime, such as Docker container

First things first

First, you should set up your Node.js environment; I recommend using nvm for this.

The serverless CLI must now be installed as a global npm package.

# (npm) install serverless as global package
npm install -g serverless

# (yarn)
yarn global add serverless

Generating the project structure

Following command will create a Node.js AWS lambda template.

serverless create --template aws-nodejs --path hello-world

Serverless Offline and Typescript support

Let's add some packages to the project.

npm install -D serverless-plugin-typescript typescript serverless-offline

# yarn

yarn add -D serverless-plugin-typescript typescript serverless-offline

# pnpm

pnpm install -D serverless-plugin-typescript typescript serverless-offline

Show the code

If you prefer, you can clone the repository.
  • hello_world/selector.ts

This file includes the function that converts external data to API contracts.

import { CurrencyResponse } from './crawler'

export type Currency = {
  name: string
  code: string
  bid: number
  ask: number
}

export const selectCurrencies = (response: CurrencyResponse) =>
  Object.values(response).map(
    currency =>
      ({
        name: currency.name,
        code: currency.code,
        bid: parseFloat(currency.bid),
        ask: parseFloat(currency.ask),
      } as Currency)
  )

export default {
  selectCurrencies,
}
  • hello_world/crawler.ts

This file contains the main function, which retrieves data from a JSON API using currency values.

export type CurrencySourceData = {
  code: string
  codein: string
  name: string
  high: string
  low: string
  varBid: string
  pctChange: string
  bid: string
  ask: string
  timestamp: string
  create_date: string
}

export type CurrencyResponse = Record<string, CurrencySourceData>

export const apiUrl = 'https://economia.awesomeapi.com.br'

export async function getCurrencies(currency) {
  const response = await fetch(`${apiUrl}/last/${currency}`)

  if (response.status != 200)
    throw Error('Error while trying to get currencies from external API')

  return (await response.json()) as CurrencyResponse
}

export default {
  apiUrl,
  getCurrencies,
}
  • hello_world/handler.ts

Now we have a file containing a function that acts as an entrypoint for AWS Lambda.


import { getCurrencies } from './crawler'
import { selectCurrencies } from './selector'
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda'

const DEFAULT_CURRENCY = 'USD-BRL,EUR-BRL,BTC-BRL' as const

export async function listCurrencies(
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> {
  try {
    const currency = event.queryStringParameters?.currency || DEFAULT_CURRENCY
    const currencies = selectCurrencies(await getCurrencies(currency))

    return {
      statusCode: 200,
      body: JSON.stringify(currencies, null, 2),
    }
  } catch (e) {
    console.error(e.toString())

    return {
      statusCode: 500,
      body: '🫡 Something bad happened',
    }
  }
}

export default {
  listCurrencies,
}
💡 The highlighted lines indicate that, if we had more than one function in the same project, we could wrap the handlers' promises to centralize error handling, as sketched below.
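A minimal sketch of that idea (the wrapper name and shape are my own, not something shipped in the article's repository):

import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda'

type Handler = (event: APIGatewayProxyEvent) => Promise<APIGatewayProxyResult>

// Wraps any handler so unexpected errors always become a 500 response.
export const withErrorHandling =
  (handler: Handler): Handler =>
  async event => {
    try {
      return await handler(event)
    } catch (e) {
      console.error(String(e))
      return { statusCode: 500, body: '🫡 Something bad happened' }
    }
  }

// usage: export const listCurrencies = withErrorHandling(async event => { ... })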
  • hello_world/serverless.yml

This file explains how this set of code will run on AWS servers.

service: service-currencies
frameworkVersion: "3"

provider:
  name: aws
  runtime: nodejs18.x

functions:
  api:
    handler: handler.listCurrencies
    events:
      - httpApi:
          path: /
          method: get
plugins:
  - serverless-plugin-typescript
  - serverless-offline
  • hello_world/tsconfig.json

The Typescript settings.

{
  "compilerOptions": {
    "preserveConstEnums": true,
    "strictNullChecks": true,
    "sourceMap": true,
    "allowJs": true,
    "target": "es5",
    "outDir": "dist",
    "moduleResolution": "node",
    "lib": ["es2015"],
    "rootDir": "./"
  }
}

Execution

Let's test the serverless execution with following command:

SLS_DEBUG=* serverless offline

# or

SLS_DEBUG=* sls offline

You can look at the API response at http://localhost:3000.


We can run lambda locally without the Serverless offline plugin and get the result in the shell:

sls invoke local -f api

Tests

I use Jest to improve test coverage and to illustrate this wonderful practice, which is often discussed but not used as frequently as it should be 😏. I'm not here to claim full coverage, but some coverage is required.

  • hello_world/__tests__ /handler.spec.ts
import {
  APIGatewayProxyEvent,
  APIGatewayProxyEventQueryStringParameters,
} from 'aws-lambda'
import { listCurrencies } from '../handler'
import fetchMock = require('fetch-mock')
import { getFixture } from './support/fixtures'

describe('given listen currencies http request', function () {
  beforeEach(() => fetchMock.restore())

  it('should raise error when Currency param is empty', async function () {
    fetchMock.mock(/\/last\//, { status: 404, body: '' })

    const event = { queryStringParameters: {} } as APIGatewayProxyEvent

    const result = await listCurrencies(event)

    expect(result).toEqual({
      body: '🫡 Something bad happened',
      statusCode: 500,
    })
  })

  it('should return currency list', async function () {
    fetchMock.mock(/\/last\//, {
      status: 200,
      body: getFixture('list_currencies_ok.json'),
    })

    const event = {
      queryStringParameters: {
        currency: 'USD-BRL,EUR-BRL,BTC-BRL',
      } as APIGatewayProxyEventQueryStringParameters,
    } as APIGatewayProxyEvent

    const result = await listCurrencies(event)
    expect(result.statusCode).toBe(200)
    expect(JSON.parse(result.body)).toEqual([])
  })
})

A lot of code will be required to run tests; take a look at the repository and then type:

npm test

Extra pipeline

Pipeline GitHub actions with tests, linter (eslint) and checker:

name: build

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: 'hello-world'

    steps:
      - uses: actions/checkout@v3
      - uses: pnpm/action-setup@v3
        with:
          version: 8
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v3
        with:
          node-version: '18.x'
          cache: 'pnpm'
          cache-dependency-path: ./hello-world/pnpm-lock.yaml

      - name: Install dependencies
        run: pnpm install

      - name: Run ci
        run: npm run test && npm run lint && npm run check

Final Thoughts

In this post, we discussed how to set up our serverless function in a development context and how to execute and test it before moving it to the production environment, as it should be. That wraps up the first phase; I'll publish a second post describing how to move our local function into production and deploy it in an AWS environment.

Thank you for your time, and please keep your kernel 🧠 updated to the most recent version. God brings us blessings 🕊️.

Part 2...

Building and deploying AWS Lambda with Serverless framework in just a few of minutes - Part 2
I will show you how to deploy the lambda that we constructed in the previous section. So, we need to set up AWS in preparation for deployment.
]]>
<![CDATA[Where do I start? A functional coffee with Elixir]]>https://willsena.dev/por-onde-eu-comeco-um-cafe-funcional-com-elixir/64dc0a5ff4729a000964e055Tue, 16 Jan 2024 23:00:00 GMT

A few years ago, I traded the comfort of the language (C#) and the context (Microsoft) I had worked with for many years for a challenge involving Rails, Elixir, and other languages.

I was looking for that career turn to add other flavors to my experience. I admit this challenge improved the way I think and how I use the technologies at hand. As a long-time Linux user and a programmer of varied languages such as C#, Node.js, Java, Python, and Rails, I needed this change. I was spending at least 8 hours a day on Windows, focused on Visual Studio and a bit on SQL Server, which is not bad, but in my opinion I needed that generalization beyond the proven experience of my day-to-day work.

Even though I was building personal projects with different technologies, I still missed that daily experience of trading notes and improving a little every day. Today's innovations happen at a different pace, not on the dated schedule we were used to.

To give you an idea, back-end language updates used to come every 1 to 3 years; on the front end, browsers took a long time to cover ECMAScript 2015 (ES6); and there were few different databases. In short, that was the update cadence...

For you...

If you feel that urge to discover something outside your context, this article is for you: someone who has worked a lot with object-oriented languages and wants to understand a bit about the functional language Elixir, and where to start...

This language improved the way I program. I was one of those programmers who applied paradigms and patterns even to Hello World, you know? Because in the Guru book someone said this is the only way to solve this problem. In short, there was a certain lack of maturity.


My first impression of Elixir was amazing: modules, functions, pure functions with no effects or surprises, and also impure ones that access files, databases, or services.

What I want to highlight is that my first exposure to Elixir was quite different, because I had never worked with functional languages like Haskell, Erlang, OCaml, Scala, F#, and Clojure; I had only seen or heard about them 😆.

Of course, for those who have already worked with one of these languages, which carry many concepts and principles, the exposure and the opinion may be different; we should applaud Elixir's effort to provide such a wide range of language features.

The structure of the language helps keep code clean and elegant, and on top of all the powerful features of the BEAM (Erlang VM), it supports the development of large applications. One example of an Erlang application is our beloved RabbitMQ, well known to us developers, and another case known to all of us is WhatsApp.

Below is a list of Elixir cases:

  • Discord
  • Heroku
  • Pepsico

What is Elixir?

It is a general-purpose functional programming language that runs on the Erlang virtual machine (BEAM). Compiling on top of Erlang, Elixir delivers distributed, fault-tolerant applications that use CPU and memory resources in an optimized way. It also provides metaprogramming features such as macros and polymorphism via protocols.

An important point: the language was created by the Brazilian 🇧🇷 José Valim.

Elixir is syntactically similar to Ruby and also borrows features such as the popular doctest and list comprehensions from Python. We can say these inspirations brought best programming practices into the language.

The language is dynamically typed, which means all types are checked at runtime, just like Ruby and JavaScript.

We can "type" some things using Typespecs and Dialyzer to validate inconsistencies, but this does not interfere with compilation...
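A small illustration of what a Typespec looks like (Dialyzer can then flag a call such as Typed.sum("1", 2), even though the code still compiles and runs):

defmodule Typed do
  @spec sum(number(), number()) :: number()
  def sum(x, y), do: x + y
end

Typed.sum(1, 2)
# 3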

Installing Elixir

Elixir has a version manager called Kiex, but to run Elixir we need an Erlang virtual machine, managed by Kerl. Many installers are not a good way to start, okay?

I recommend using ASDF, which has plugins for both Elixir and Erlang, and also supports the .tools-version file, where you specify which Erlang VM (OTP) version and which Elixir version your application uses.

I wrote an article about ASDF + Elixir, so I recommend installing it by following that article:

Using ASDF to Manage Programming Language Runtime Versions
ASDF is a command-line tool that allows you to manage multiple language runtime versions, useful for developers who use a runtime version list

Read Eval Print Loop (REPL)

The best place to get to know, learn, and test a language is the REPL. We will now try out the language's concepts before adding any kind of project prototype.

Data types

Below are the data types that exist in Elixir.

# Integers
numero = 1
# Floats
flutuante = 3.14159265358979323846

# Booleans
verdadeiro = true
falso = false

# Atoms
atom = :sou_atom

# Strings
string = "texto"

# Map
map = %{ "name" => "Foo", "action" => "bar" }
map_atom = %{ name: "Foo", action: :bar }
[map["action"], map_atom.action]
# ["bar", :bar]

# Keyword list
keyword_list = [name: "Bar", action: :foo]
action = keyword_list[:action]

# List
list = [1, 2, 3]

Immutable values

Immutability is a feature also present in object-oriented languages (such as C# and Java); in Elixir it is native. Functional programming establishes that variables cannot be modified after initialization, with the simple purpose of avoiding effects that change the result; in short, fn (1+1) = 2.

If a new value is assigned, a new variable is created. In short, we keep no reference to that variable; I'll give an example of a reference using JavaScript and show how it would look in Elixir.

In JavaScript...

var messages = ["hello"]

function pushMessageWithEffect(list, message) {
    list.push(message)

    return list
}

function pushMessage(list, message) {
    return list.concat(message)
}

const nextMessage = pushMessage(messages, "world")
console.log(messages, nextMessage)
// [ 'hello' ] [ 'hello', 'world' ]

const nextMessage2 = pushMessageWithEffect(messages, "galaxy")
console.log(messages, nextMessage2)
// [ 'hello', 'galaxy' ] [ 'hello', 'galaxy' ]

In Elixir...

defmodule Messages do
    def push_message(list, message) do
        list ++ [message]
    end
end

messages = ["hello"]
next_message = Messages.push_message(messages, "world")
{ messages, next_message }
# {["hello"], ["hello", "world"]}

In Elixir we deal with modules and functions; there are no classes, so values do not inherit behavior. In Java, for example, every object gets a toString() that can be overridden to translate a class into a String, just like C#'s Object.ToString().

This way it would be impossible to take the list and call a method that modifies it; we need to produce a new list. For list and map operations, Elixir has the Enum module, with many implementations such as map, reduce, filters, concatenation, and other features, as the sketch below shows.
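A few Enum calls as a quick sketch, each returning a new list or value instead of mutating its input:

list = [1, 2, 3, 4]

Enum.map(list, &(&1 * 2))
# [2, 4, 6, 8]

Enum.filter(list, &(rem(&1, 2) == 0))
# [2, 4]

Enum.reduce(list, 0, &(&1 + &2))
# 10

list
# [1, 2, 3, 4] (the original list is untouched)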

Functions

Functions are responsible for the behaviors that define a program. They can be pure or impure.

Pure functions

  • Work with immutable values;
  • The function's result is determined only by its explicit arguments, no magic 🧙‍♂️;
  • Executing the function has no side effects;

Impure functions

We can describe impure functions as complex: they may depend on external resources or run processes that directly affect their result, as in the examples below:

  • Writing to files or a database;
  • Publishing messages to queues;
  • HTTP requests;

These kinds of external resources, besides not guaranteeing the same response every time, can also be unstable and cause errors, something a function may not expect, making this a side effect or "impurity".

I once heard in an F# talk that in C# it is common for our methods (functions) to be impure: they either return a result of the expected type or simply interrupt the flow by throwing an exception. It makes perfect sense; the frameworks pushed us in that direction, we started creating business-related exceptions, and so we began using exceptions as code-block detours, that is, structured programming's GOTO inside object orientation 🤦.
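In Elixir, by contrast, the usual convention is to return tagged tuples such as {:ok, value} or {:error, reason} and let the caller pattern-match on them instead of catching exceptions (a minimal sketch):

defmodule SafeMath do
  def divide(_x, 0), do: {:error, :division_by_zero}
  def divide(x, y), do: {:ok, x / y}
end

case SafeMath.divide(10, 2) do
  {:ok, result} -> "result: #{result}"
  {:error, reason} -> "failed: #{reason}"
end
# "result: 5.0"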

Below is a practical example of a pure sum function and a global sum that uses the Agent module to keep state, which is what causes the side effect.

defmodule Functions do
  use Agent
  
  def start_link(initial_value) do
    Agent.start_link(fn -> initial_value end, name: __MODULE__)
  end

  def state do
    Agent.get(__MODULE__, & &1)
  end
  
  def update_state(value) do
    Agent.update(__MODULE__, &(&1 + value))
  end

  def sum(x, y) do
    x + y
  end
  
  def global_sum(x, y) do
    update_state(x + y)
    state()
  end
end

Functions.start_link(0)
# Starts a process to keep state, with an initial value of 0

Functions.sum(1, 1)
# 2

Functions.sum(1, 1)
# 2

Functions.global_sum(1, 1)
# 2

Functions.global_sum(2, 3)
# 7

Using the Agent module makes it clear that the function in question has side effects.

Concurrency, processes and fault tolerance

As already mentioned, Elixir handles processes in a highly optimized way across CPU cores, thanks in part to running on the Erlang virtual machine (BEAM).

Elixir has background-process implementations called GenServer/GenStage. Suppose you want to create a process that is triggered by a queue, or a scheduled process that sends an HTTP request.

You can scale the workload to run (N) GenServer/GenStage processes; in addition, there is the Supervisor, a special process whose purpose is to monitor other processes.

These supervisors let you build fault-tolerant applications by automatically restarting child processes when they fail.

This topic can be considered Elixir's main selling point.
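As a minimal sketch of the idea (the module name, message, and interval are illustrative, not taken from any project), a GenServer that wakes up periodically to do some work, and that a Supervisor can restart when it crashes, looks like this:

defmodule Poller do
  use GenServer

  def start_link(opts \\ []) do
    GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  end

  @impl true
  def init(state) do
    schedule_work()
    {:ok, state}
  end

  @impl true
  def handle_info(:work, state) do
    # the impure part (an HTTP request or queue poll) would happen here
    schedule_work()
    {:noreply, state}
  end

  defp schedule_work, do: Process.send_after(self(), :work, 1_000)
end

# Supervised with automatic restarts:
# Supervisor.start_link([Poller], strategy: :one_for_one)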

Below is a code snippet that configures the application supervisor from one of my projects under development, BugsChannel.

def start(_type, _args) do
    children =
      [
        {BugsChannel.Cache, []},
        {Bandit, plug: BugsChannel.Api.Router, port: server_port()}
      ] ++
        Applications.Settings.start(database_mode()) ++
        Applications.Sentry.start() ++
        Applications.Gnat.start() ++
        Applications.Channels.start() ++
        Applications.Mongo.start(database_mode()) ++
        Applications.Redis.start(event_target())

    opts = [strategy: :one_for_one, name: BugsChannel.Supervisor]

    Logger.info("🐛 Starting application...")

    Supervisor.start_link(children, opts)
end

In this case the supervisor is responsible for several processes, such as queues, databases, and others.

Macros

There is a clear statement in the Elixir documentation about macros: "Macros should only be used as a last resort. Remember that explicit is better than implicit. Clear code is better than concise code." ❤️

Macros can be considered magic 🎩, and just like in an RPG, every spell has a price 🎲. We use them to share behaviors. Below is a basic example of what we can do, simulating inheritance by using a module as a base class.

defmodule Publisher do
  defmacro __using__(_opts) do
    quote do
      def send(queue, message) do
        :queue.in(message, queue)
      end

      defoverridable send: 2
    end
  end
end

defmodule Greeter do
  use Publisher

  def send(queue, name) do
    super(queue, "Hello #{name}")
  end
end

queue = :queue.from_list([])

queue = Greeter.send(queue, "world")

:queue.to_list(queue)
# ["Hello world"]

The Publisher module defines a function called send/2. This function is redefined by the Greeter module to add a standard prefix to the messages, similar to class method overrides.

For greater clarity, we could implement this example without inheritance, using module composition or just calling the module directly. For this reason, macros should always be treated as a last resort.

defmodule Publisher do
  def send(queue, message) do
    :queue.in(message, queue)
  end
end

defmodule Greeter do
  def send(queue, name) do
    Publisher.send(queue, "Hello #{name}")
  end
end

queue = :queue.from_list([])

queue = Greeter.send(queue, "world")

:queue.to_list(queue)
# ["Hello world"]

Besides use, Elixir defines other directives for reusing functions (alias, import, require). Usage examples:

defmodule Math.CrazyMath do
  def sum_pow(x, y), do: (x + y) + (x ** y)
end

defmodule AppAlias do
  alias Math.CrazyMath
  
  def calc(x, y) do
    "The sum pow is #{CrazyMath.sum_pow(x, y)}"
  end
end

defmodule AppImport do
  import Math.CrazyMath
  
  def calc(x, y) do
    "The sum pow is #{sum_pow(x, y)}"
  end
end

defmodule AppRequire do
  defmacro calc(x, y) do
    "The sum pow is #{Math.CrazyMath.sum_pow(x, y)}"
  end
end

AppAlias.calc(2, 2)
# "The sum pow is 8"

AppImport.calc(2, 2)
# "The sum pow is 8"

AppRequire.calc(2, 2)
# function AppRequire.calc/2 is undefined or private. 
# However, there is a macro with the same name and arity. 
# Be sure to require AppRequire if you intend to invoke this macro

require AppRequire
AppRequire.calc(2, 2)
# "The sum pow is 8"

Pattern Matching

In most languages, method overloading is based on the number of arguments and their data types, which define a signature that helps the compiled code identify which method should be invoked, since the methods share the same name but have different signatures.

In Elixir there is pattern matching everywhere, from function overloading to conditionals. This language behavior is fantastic; we only need to pay attention to structure and behavior.

defmodule Greeter do
  def send_message(%{ "message" => message }), do: do_message(message)
  
  def send_message(%{ message: message }), do: do_message(message)
  
  def send_message(message: message), do: do_message(message)
  
  def send_message(message) when is_binary(message), do: do_message(message)
  
  def send_message(message), do: "Invalid message #{inspect(message)}"
  
  def send_hello_message(message) when is_binary(message), do: do_message(message, "hello")
  
  def do_message(message, prefix \\ nil) do
    if is_nil(prefix),
      do: message,
      else: "#{prefix} #{message}"
  end
end

Greeter.send_message("hello world string")
# "hello world string"
Greeter.send_message(message: "hello keyword list")
# "hello keyword list"
Greeter.send_message(%{ "message" => "hello map", "args" => "ok" })
# "hello map"
Greeter.send_message(%{ message: "hello atom map", args: "ok" })
# "hello atom map"
Greeter.send_hello_message("with prefix")
# "hello with prefix"

some_var = {:ok, "success"}
{:ok, message} = some_var

Conditionals

We can build conditionals with the familiar if and case structures; there is also cond, which lets us validate multiple conditions in an organized and elegant way.

defmodule Greeter do
  def say(:if, name, lang) do
    if lang == "pt" do
      "Olá #{name}"
    else
      if lang == "es" do
        "Hola #{name}"
      else
        if lang == "en" do
          "Hello #{name}"
        else
          "👋"
        end
      end
    end
  end

  def say(:cond, name, lang) do
    cond do
      lang == "pt" -> "Olá #{name}"
      lang == "es" -> "Hola #{name}"
      lang == "en" -> "Hello #{name}"
      true -> "👋"
    end
  end
  
  def say(:case, name, lang) do
    case lang do
      "pt" -> "Olá #{name}"
      "es" -> "Hola #{name}"
      "en" -> "Hello #{name}"
      _ -> "👋"
    end
  end
end

langs = ["pt", "en", "es", "xx"]

Enum.map(langs, fn lang -> Greeter.say(:if, "world", lang)  end)
# ["Olá world", "Hello world", "Hola world", "👋"]

Enum.map(langs, & Greeter.say(:case, "world", &1))
# ["Olá world", "Hello world", "Hola world", "👋"]

Enum.map(~w(pt en es xx), & Greeter.say(:cond, "world", &1))
# ["Olá world", "Hello world", "Hola world", "👋"]

Here are a few notes on the implementation, for additional clarity.

  • Notice that if is not advantageous here and causes the hadouken effect, due to the lack of an "else if"; that construct does not exist in Elixir, and I believe this is intentional, since we have other ways to handle these conditions, using case or cond, and we can even use guards inside case;
  • Sigils, also present in Ruby: you can define a word list like this, ~w(pt en es xx);
  • & &1 is the shorthand for defining an anonymous function, where &1 refers to its first argument, in this case the language (pt, en, es, or xx);

Function, function, function

The language's control structures are functions, and you can get their return values as follows:

input = "123"

result = if is_nil(input), do: 0, else: Integer.parse(input)
# {123, ""}

result2 = if is_binary(result), do: Integer.parse(result)
# nil

result3 = case result do
  {number, _} -> number
  _ -> :error
end

result4 = cond do
  is_atom(result3) -> nil
  true -> :error
end
# :error

The if, case, and cond constructs are functions with syntactic sugar, unlike Clojure, where if is a plain function and it is very clear that you are working with the function's result. In my opinion, I prefer the syntactic sugar; in this case it greatly improves readability and the elegance of the code 👔.

Pipe Operator

To make code easier to follow when there is a pipeline of function calls, the pipe operator takes the result on the left and passes it to the right. Amazing! This feature should exist in every programming language. There is an implementation proposal for JavaScript 🤩; who knows, maybe one day we'll have it natively!

defmodule Math do
  def sum(x, y), do: x + y
  def subtract(x, y), do: x - y
  def multiply(x, y), do: x * y
  def div(x, y), do: x / y
end

x = 
  1
  |> Math.sum(2)
  |> Math.subtract(1)
  |> Math.multiply(2)
  |> Math.div(4)
  
x
# (((1 + 2) - 1) * 2) / 4
# 1

Other features

String concatenation

x = "hello"
y = "#{x} world"
z = x <> " world" 
# "hello world"

x = nil
"valor de x=#{x}"
# "valor de x="

Guards

Guards are used to refine pattern matching, whether in conditionals or functions:

defmodule Blank do
    def blank?(""), do: true
    def blank?(nil), do: true
    def blank?(map) when map_size(map) == 0, do: true
    def blank?(list) when Kernel.length(list) == 0, do: true
    def blank?(_), do: false
end

Enum.map(["", nil, %{}, [], %{foo: :bar}], & Blank.blank?(&1))
# [true, true, true, true, false]

require Logger

case {:info, "log message"} do
  {state, message} when state in ~w(info ok)a -> Logger.info(message)
  {state, message} when state == :warn -> Logger.warning(message)
  {state, message} -> Logger.debug(message)
end

# [info] log message

Erlang

We can access Erlang features directly from Elixir as follows:

queue = :queue.new()
queue = :queue.in("message", queue)

:queue.peek(queue)
# {:value, "message"}

Erlang provides a module for creating in-memory (FIFO) queues: the queue module.

Libraries and support

Elixir was released in 2012 and is a more recent language compared with Go, released in 2009. We can find many libraries in the Hex package repository. The interesting part is the compatibility with Erlang packages, and there are Elixir adaptations of well-known Erlang packages.

One example is Plug.Cowboy, which uses Erlang's Cowboy web server through Plug in Elixir, a library for building applications out of functions on top of several Erlang web servers.

It's worth noting that Erlang is a solid language that has been on the market for a long time, since 1986, and whatever does not exist in Elixir we will probably find in Erlang.

There are direct contributions from the language's creator, José Valim, from other companies, and a lot of work from the community itself.

Below are well-known Elixir libraries and frameworks:

  • Phoenix, a web development framework written in Elixir that implements the server-side MVC (Model View Controller) pattern.
  • Ecto, Elixir's ORM, a toolkit for data mapping and an integrated query language.
  • Jason, a blazing fast JSON parser and generator in pure Elixir.
  • Absinthe, the GraphQL implementation for Elixir.
  • Broadway, for building concurrent, multi-stage data processing pipelines with Elixir.
  • Tesla, an HTTP client inspired by Faraday (Ruby);
  • Credo, a static code analysis tool for the Elixir language, focused on teaching and code consistency.
  • Dialyxir, Mix tasks to simplify the use of Dialyzer in Elixir projects.

Wrapping up...

The goal of this article was to brew an espresso ☕, but I ended up grinding a few extra beans to extract what I found good in Elixir, intending to share it and bring the details to the table for anyone curious and willing to understand a bit more about the language and functional programming concepts. Some topics were certainly left out; it would be impossible to cover Elixir in a single article 🙃, so it stays as technical debt...

Por onde eu começo! Um café funcional com Elixir

A big hug, God bless you 🕊️, and I wish you all a Happy New Year.

Always keep your kernel 🧠 updated.

References

]]>
<![CDATA[The steps to producing a legacy system]]>https://willsena.dev/the-steps-to-producing-a-legacy-system/656dd7141cd839000af26095Fri, 08 Dec 2023 23:13:15 GMTThe old and new legacy systemsThe steps to producing a legacy system

When we think about legacy systems, we typically think of systems developed in languages like Cobol, Clipper, Pascal, Delphi, Visual Basic, connected to old databases such as Paradox, DB2 and, Firebird.

Nowadays, it's a little different in an organization with multiple languages and projects. For example, PayPal opted to move from Java to Node years ago, and Twitter switched from Ruby to Java. From these examples, we can see that in the legacy context we are dealing with modern languages, such as Ruby and Java. However, I don't think these teams were driven to change because they favored one language over another.

Refactoring as the solution?

Refactoring is becoming more popular for a particular group of engineers who prefer the hype language over others. I'm not here to pass judgment, because I wore that cloak 🧙🏾 at a specific point in my career. But I should emphasize that refactoring is never the easiest or best way to solve an issue. As a developer who works with a range of programming languages, you will never find a bulletproof language that works equally well for frontend and backend or BFF (backend for frontend), that is amazing for mobile, that is lovely and comprehensible for concurrency, that is comfortable to test, and so on...

Stop thinking about frameworks and start thinking about how this language will work in your project, and ask some questions: is the learning curve reasonable for the other members? Consider how other people will solve issues in the project you produced. Because if you don't care, you're creating the next legacy system.

Let's get started on a list for creating a stunning legacy system.

1) Languages with limited library support

Before deciding on a programming language, first evaluate what you plan to develop as a project, and then whether your stack will be supported by that language. For example:

Say I'd like to create a project involving machine learning or data science. Python, as you may know, is widely used for these purposes and has strong commercial and community support. We may be able to find equivalent Java or Node libraries, but you will almost certainly have to get your hands dirty translating library behaviors and providing some compatibility.

I'm not arguing that it's completely wrong to use language A or B; you can choose, but you should weigh the advantages and downsides. And this decision matters in the long run, when your team chooses to move to another language because there is no support for building quickly, since nowadays you must release fast or your solution design may become outdated.

The steps to producing a legacy system

2) Use a framework that updates slowly

Nowadays, languages support a wide range of databases, services, and integrations, but occasionally there is limited support, or the community does not generate active updates based on your requirements. That condition is common, for example: NPM, RubyGems and Hex packages without updates for months or years.

Some projects are mature and there is no need to update them so frequently, but there comes a point when the project is supported by three core committers, each of whom has their own priorities. In that case, you must work with these dependencies and collaborate on the open-source projects to solve issues or improve security; so, before settling on a framework, list its dependencies as clearly as possible.

Therefore, if open-source efforts exceed commercial efforts, your team may switch from one framework or language to another, introducing legacy systems.
The steps to producing a legacy system

3) Don't think about concurrency or performance.

We commonly hear monolith first and keep it simple, which is a genuine and reliable technique for launching your MVP as soon as possible, but be careful to keep things simple enough to level up when necessary. I'm not advocating putting reuse ahead of usage, but don't make things too dependent on a framework. A few lightweight abstractions will allow you to upgrade when "concurrency" comes knocking at your door demanding more performance.

The steps to producing a legacy system
The "performance" is at the door.

4) Avoid writing tests or maintaining adequate coverage.

Test coverage is a sensitive topic; I've heard that quantity isn't necessarily better than quality, but less coverage is always worse. You should not write code that lacks appropriate coverage; instead, you should enumerate the possible cases to cover; that is every developer's duty. Assume you are a developer for an airline system; is less coverage acceptable? Okay, I took it too seriously. But if we get a system with no tests and a bad design, we should replace it as quickly as possible.

The steps to producing a legacy system

However, these systems will occasionally live a long time if they work properly, don't interfere with the overall performance of the stack, and the team has no plans to touch these poorly built systems. A system without tests is a good way to start a legacy system, in my opinion.

5) Write in a new language in the same way you do in previous languages.

It is important to note that approaches and patterns can be applied to any language, but you should be aware of the two paradigms commonly used when developing a project: the most well-known is Object-Oriented Programming (OOP), and another strong paradigm is Functional Programming (FP). While OOP emphasizes class management and reusing behaviors, FP is about modules, functions, and immutability, so we are comparing quite different approaches. I propose using a well-known project's design as a guide when developing your project in a new language, because it's common and understandable to write code in a new language and then have another person look at it and remark...

The steps to producing a legacy system
This code appears to be in another language...

To summarize, writing code in a new language is challenging, especially when establishing a new project, but it is a worthwhile experience. If you did your homework and chose this challenge, try to develop small initiatives first; it's not time to rewrite all behaviors that you consider legacy.

Remember that your baby system could become legacy at the same rate that an npm package is released. 😆

6) Write code for yourself rather than for your team.

I believe this happens more frequently than it should. We should not think of code as abstract art, because abstract art is about feelings and is hard to comprehend. Don't let hype influence how you construct your application; the code should read as straightforwardly as a story.

When coding, try to use well-known and rock-solid methodologies such as SOLID. If you develop a project for yourself, someone will look at it months or years later and say it's too complex, and it's time to retire it and...

The steps to producing a legacy system
A new legacy was born. To replace the legacy system features, a new system will be released.

Final thoughts

In this article, I discuss things to think about while developing a new system with new behaviors or refactoring behaviors from an existing production system. Sometimes a new system is successful, as proven by metrics and the team, but other times a redesign works well for a brief amount of time and yet another design is required.

To recap, I am not arguing that we should not try hype languages or frameworks, but rather that when you want to bring these new techniques to your team, you should do your homework, ask questions, create proofs of concept (PoC), and collect metrics to avoid replacing one issue with another.

Thank you for investing your time in reading; God 🕊️ be with you and keep your kernel 🧠 updated, bye.

]]>
<![CDATA[Fixing a Bumblebee issue after installing Manjaro Linux]]>https://willsena.dev/fixing-a-bumbebee-issue-after-installing-manjaro-linux/656514d2b35923000b2bd409Tue, 28 Nov 2023 09:37:50 GMT

For a brief period, I changed my configuration to use Optimus Manager instead of Bumblebee because I couldn't get my NVIDIA GPU to work with the optirun command. I must admit that this always worked when I used Debian-based distros, but it's not a big deal for me to keep using them.

I love them as containers or production instances, but not on my desktop. I'm not a big fan of extra apt repositories; after updates, some incompatibilities happen and packages may break. For me this is frequent, because I'm a developer and I use so many packages 📦.

First, let's define Bumblebee.

Fixing a Bumblebee issue after installing Manjaro Linux
Okay that's it!

In the context of GPUs (Graphics Processing Units) and Linux, Bumblebee refers to a project that allows you to use a system's dedicated GPU (typically NVIDIA) for producing graphics while still using the integrated GPU for less demanding tasks. This is especially beneficial in laptops when running applications that do not require the full capabilities of the dedicated GPU.

Bumblebee's principal application is in laptops with dual GPUs, which include an integrated GPU (like Intel's integrated graphics) and a dedicated GPU (like NVIDIA). Bumblebee enables switching between the GPUs depending on the application's graphical processing requirements.

Here's an overview of how Bumblebee works:

  • Integrated GPU (for example, Intel): Handles basic rendering and the desktop environment.
  • Dedicated GPU (e.g., NVIDIA): Remains in a low-power state until a more graphics-intensive task requires it.
  • Optimus Technology (NVIDIA): This is a technology that enables seamless switching between the integrated and dedicated GPUs depending on the workload.
  • Bumblebee: Serves as a bridge between Optimus Technology and the Linux operating system. It enables selected apps to use the dedicated GPU while keeping the rest of the system on the integrated GPU to save power.

It's worth noting that you can use only the NVIDIA GPU and skip your integrated GPU, but you'll sacrifice battery life for graphical acceleration.

If you purchased a Dell laptop 5 years ago, you may have chosen an Inspiron model with a dedicated GeForce GPU and integrated Intel graphics. Nowadays, an AMD Ryzen with integrated graphics solves these challenges, and you don't need Bumblebee 😭.

Requirements

First, ensure that your Bumblebee service is running and in good health.

sudo systemctl status bumblebeed
● bumblebeed.service - Bumblebee C Daemon
     Loaded: loaded (/usr/lib/systemd/system/bumblebeed.service; enabled; preset: disabled)
     Active: active (running) since ...

You should start the service if it is not already operating.

# checkout logs
journalctl -u bumblebeed

# running service
sudo systemctl start bumblebeed

Using the optirun or primusrun commands

Both optirun and primusrun are commands that work in tandem with the Bumblebee project, which enables dynamic switching between integrated and dedicated GPUs on laptops equipped with NVIDIA Optimus technology. These commands accomplish similar tasks but differ in terms of performance and how they handle the rendering process.

  1. optirun: This Bumblebee project command is used to run a program with the dedicated GPU. It employs VirtualGL as a bridge, rendering graphics on the dedicated GPU before sending the output to the integrated GPU for display. The disadvantage is that the method includes duplicating frames between GPUs, which can increase overhead and degrade speed. Wine and Crossover, for example, may not work correctly in this way.
  2. primusrun: This command is part of the Bumblebee project as well, but it takes a different approach. It employs Primus as a VirtualGL backend in order to reduce the overhead involved in copying frames between GPUs. When compared to optirun, Primus seeks to improve performance by providing a more efficient approach to handling the rendering process, resulting in higher frame rates for GPU-intensive applications, plus improved support for Wine and Crossover apps.

The file /etc/bumblebee/xorg.conf.nvidia that follows is an exact representation of the default settings generated by your Linux distribution, in this instance Manjaro Hardware Detection (mhwd).

##
## Generated by mhwd - Manjaro Hardware Detection
##

Section "ServerLayout"
    Identifier "Layout0"
    Option "AutoAddDevices" "false"
EndSection

Section "ServerFlags"
  Option "IgnoreABI" "1"
EndSection

Section "Device"
    Identifier  "Device1"
    Driver      "nvidia"
    VendorName "NVIDIA Corporation"
    Option "NoLogo" "true"
    Option "UseEDID" "false"
    Option "ConnectedMonitor" "DFP"
EndSection

Before performing the testing command, install glxgears from the mesa-utils package:

# manjaro pamac
sudo pamac install mesa-utils

# with pacman
sudo pacman -S mesa-utils

To reproduce the issue, let's run optirun and primusrun with an application that exercises the graphics card.

optirun glxgears --info

primusrun glxgears --info

The result should be the issue below:

  • Optirun
[ 9634.005329] [ERROR]Cannot access secondary GPU - error: [XORG] (EE) No devices detected.

[ 9634.005368] [ERROR]Aborting because fallback start is disabled.
  • Primusrun
primus: fatal: Bumblebee daemon reported: error: [XORG] (EE) No devices detected.

This problem occurred because the NVIDIA configuration did not include the required BusID for the dedicated GPU device.

The command below will return your BusID devices.

lspci

The result:

08:00.0 3D controller: NVIDIA Corporation GK208BM [GeForce 920M] (rev a1)
08:00.1 Audio device: NVIDIA Corporation GK208 HDMI/DP Audio Controller (rev a1)

In this scenario, 08:00.0 is the BusID device for my NVIDIA dedicated GPU. So, in the configuration file /etc/bumblebee/xorg.conf.nvidia, specify this reference at section device (BusID "PCI:08:00:0"):

⚡ Please replace the dot with a colon in the suffix BusID. From PCI:08:00.0 to PCI:08:00:0
sudo nano /etc/bumblebee/xorg.conf.nvidia
##
## Generated by mhwd - Manjaro Hardware Detection
##
 
 
Section "ServerLayout"
    Identifier "Layout0"
    Option "AutoAddDevices" "false"
EndSection

Section "ServerFlags"
  Option "IgnoreABI" "1"
EndSection

Section "Device"
    Identifier  "Device1"
    Driver      "nvidia"
    VendorName "NVIDIA Corporation"
    Option "NoLogo" "true"
    Option "UseEDID" "false"
    Option "ConnectedMonitor" "DFP"
    BusID "PCI:08:00:0"
EndSection

We can now execute commands without trouble.

optirun glxgears --info

primusrun glxgears --info
Fixing a Bumblebee issue after installing Manjaro Linux

Final thoughts

Today's article addresses a typical problem that happens after installing Manjaro if you have multiple graphics cards and use Bumblebee to manage them. Users of Arch-based distros may be familiar with this issue. Thank you for reading. I hope this article helped you solve your problem and that you enjoy your games.

God bless 🕊️ your day and your kernel 🧠, and I hope to see you soon.

References

]]>
<![CDATA[Exploring Awesome WM, my preferred window manager]]>https://willsena.dev/exploring-awesome-wm-my-preferred-window-manager/655bb6fbe8f82c000b0b5097Mon, 20 Nov 2023 21:14:35 GMT

A few weeks ago, I decided to switch from my latest desktop (Budgie) to window managers. During my early experiences with Conectiva, Mandrake, and Slackware, I used Blackbox and thought it was fantastic, but I didn't know how to configure things at the time.

So I went back to KDE after trying GNOME, Deepin, Pantheon, XFCE and Budgie. I became a distro hopper 🕵️, digging into desktops and enjoying and hating their behaviors. I never found a desktop that was comfy for me.

Why am I here using AwesomeWM rather than i3, BSPWM, XMonad, and other options? I haven't tried any of them yet, but the default theme and menu are similar to the old Blackbox and Fluxbox, and while I'm not a Lua highlander developer 🔥, the language is quite simple, and in a few weeks I discovered various references in Git repositories and solid AwesomeWM instructions for developing my own dotfiles and widgets.

Before diving into AwesomeWM, let's take a quick look at Window Managers.

Window Manager

A window manager is a software component that handles the placement and appearance of windows in an operating system's graphical user interface (GUI). It is in charge of managing the graphical elements on the screen, such as windows, icons, and other items, and it allows the user to interact with them.

There are various types of window managers, which can be essentially divided into two categories:

Stacking Window Managers: These allow windows to overlap and allow the user to bring any window to the foreground by clicking on it: Blackbox, Openbox, Fluxbox and Window Maker are examples of stacking window managers

Tiling Window Managers: Tiling window managers organize windows so that they do not overlap. They tile windows to fill the available screen space automatically, which can be more efficient for certain tasks. Instead of using a mouse, users often browse between windows using keyboard shortcuts: I3, Awesome, BSPWM, DWM, XMonad, QTile and Hyprland are examples of tiling window managers.

It's important to note that tiling features are here to stay, so much so that desktops such as GNOME and KDE have introduced tiling features of their own.

AwesomeWM

Exploring Awesome WM, my preferred window manager
At first, this is "Awesome WM" without any cosmetic changes.

Awesome Window Manager is a highly customizable, dynamic tiling window manager for the X Window System (the windowing system used by Linux and other Unix-like operating systems). It is intended to be incredibly versatile and adaptable, giving users complete control over the layout and appearance of their desktop environment.

You can do everything you want with ready-made widgets or by yourself with Lua development. I admit that I mixed the two. Creating something from scratch demands a lot from you, and you can grow bored fixing and tuning it for too long, so I decided to base my theme on CopyCats. Because there are so many specifics involved in an environment, such as network, CPU, RAM, graphics card, sound card, microphone, and so on, certain things may not work at first or with default settings.

CopyCats is a collection of themes that you can tweak or use as a base to build your own.

Template file


AwesomeWM has a file called rc.lua that contains all of the rules, behaviors, and styles for windows. This file contains comments that separate the template settings into sections.

The file is located by default at /etc/xdg/awesome/rc.lua, and you must copy it to your home location to make your changes.

sudo cp /etc/xdg/awesome/rc.lua $HOME/.config/awesome/00.rc.lua

I have no intention of describing a full template, but I will highlight key areas to demonstrate how AwesomeWM works.

You can attach commands or functions to a menu or sub-menu, but keep in mind that you can use a launcher to execute your apps; I chose Rofi.

-- {{{ Menu
-- Create a launcher widget and a main menu
myawesomemenu = {
   { "hotkeys", function() hotkeys_popup.show_help(nil, awful.screen.focused()) end },
   { "manual", terminal .. " -e man awesome" },
   { "edit config", editor_cmd .. " " .. awesome.conffile },
   { "restart", awesome.restart },
   { "quit", function() awesome.quit() end },
}

Tags (environments)

You can have as many environments as you desire, and you can access them with the (⊞ window key + arrows) shortcut.

screen.connect_signal("request::desktop_decoration", function(s)
    -- Each screen has its own tag table.
    awful.tag({ "1", "2", "3", "4", "5", "6", "7", "8", "9" }, s, awful.layout.layouts[1])
end)

Keybindings

You can define or edit any keybinding. The shortcut (⊞ window key + s) displays guidelines for all shortcuts defined in your template, which is very useful.

awful.keyboard.append_global_keybindings({
    awful.key({ modkey,           }, "s",      hotkeys_popup.show_help,
              {description="show help", group="awesome"})
})

Bars

Wibar is highly adaptable; you may specify a widget or group, as well as determine alignment, spacing, and margins. I tried using Polybar at first, but I didn't like the outcome. However, if you want to switch to another Window Manager, Polybar works in the majority of them.

s.mywibox = awful.wibar {
position = "top",
screen   = s,
widget   = {
    layout = wibox.layout.align.horizontal,
    { -- Left widgets
        layout = wibox.layout.fixed.horizontal,
        mylauncher,
        s.mytaglist,
        s.mypromptbox,
    },
    s.mytasklist, -- Middle widget
    { -- Right widgets
        layout = wibox.layout.fixed.horizontal,
        mykeyboardlayout,
        wibox.widget.systray(),
        mytextclock,
        s.mylayoutbox,
    },
    }
}

Now we'll look at templates. I don't recommend employing templates as your final work; instead, separate them into numerous files. Divide and conquer is usually a good method for organizing and keeping your code as professional as possible. Everything in the template is grouped together on purpose to present all settings in a single file; this file is a dump.

Compositor

In the same way that XFCE utilizes compiz to add blur, transparency, and graphical effects to windows, we must use picom to add cosmetic features to Awesome WM.
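As a rough sketch of what a picom.conf can contain (these values are illustrative, not the theme's actual settings), a few common options look like this:

# shadows and fade animations
shadow = true;
fading = true;
fade-delta = 4;

# rounded corners and per-application opacity
corner-radius = 8;
opacity-rule = [ "90:class_g = 'kitty'" ];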

My Awesome WM theme

After weeks of working in this environment, I created something I enjoyed; there is still more work to be done, but I'm happy with my shortcuts and environment feedback. My son accompanied me on this journey and was continuously saying to me, "let me see your little top bar" or "barrinha" in Portuguese.

This theme was named Ebenezer 🪨, which means "stone of help".

The quote is from I Samuel 7. After defeating the Philistines, Samuel raises his Ebenezer, declaring that God defeated the enemies on this spot. As a result, "hither by thy help I come." So I hope this stone helps you in your environment and, more importantly, in your life. 🙏🏿

Of course, this top bar is inspired by others, but I keep what's really important for me to monitor and track: memory, temperature, and CPU. When something exceeds its threshold, the indicator colors change, similar to the dashboards we use in a developer context. The battery exhibits the same expected behavior.

Exploring Awesome WM, my preferred window manager

I appreciate the idea of keybindings, but for some tasks I rely on mouse behaviors, such as muting the microphone and opening the wifi manager, plus tooltips that provide useful information such as wifi signal, current brightness, and battery state.

Following that, I'll explain what this implementation does, and you can clone it if you like; this theme is incredibly adaptable. I'm attempting to keep everything changeable via ini files, therefore there's a file called config.ini where you can customize style and behaviors.

The config.ini

[environment]
modkey=Mod4
weather_api_key=api_weather_key # openweathermap.org
city_id=your_city_id # openweathermap.org
logo=$THEMES/icons/tux.png
logo_icon=
logo_icon_color=#34be5b
wallpaper_slideshow=on # [off] wallpaper solo
wallpaper=$HOME/Pictures/Wallpapers/active.jpg # wallpaper solo
wallpaper_dir=$HOME/Pictures/Wallpapers # when wallpaper_slideshow=on you should inform wallpapers directory 
terminal=kitty
editor=nano
icon_theme="Papirus"
icon_widget_with=22

[commands]
lock_screen=~/.config/i3lock/run.sh
brightness_level=light -G
brightness_level_up=xbacklight -inc 10
brightness_level_down=xbacklight -dec 10
power_manager=xfce4-power-manager --no-daemon
network_manager=nm-connection-editor
cpu_thermal=bash -c "sensors | sed -rn \"s/.*Core 0:\\s+.([0-9]+).*/\1/p\""
click_logo=manjaro-settings-manager
volume_level=pactl list sinks | grep '^[[:space:]]Volume:' | head -n $(( $SINK + 1 )) | tail -n 1 | sed -e 's,.* \([0-9][0-9]*\)%.*,\1,'

[wm_class]
browsers=firefox chromium-browser microsoft-edge
editors=code-oss sublime atom

[tags]
list=     
browsers=1
terminal=2
editors=3
games=4
files=5
others=6

[topbar]
left_widgets=tag_list separator task_list
right_widgets=weather cpu_temp cpu mem arrow arrow_volume arrow_microphone arrow_network arrow_battery arrow_systray arrow_pacman arrow_brightness arrow_logout arrow_layoutbox

[startup]
picom=picom --config $THEMES/picom.conf
lock_screen=light-locker --lock-after-screensaver=10 &
desktop_policies=lxpolkit # default file polices (open files from browser)
multiple_screen=exec ~/.config/xrandr/startup.sh "1366x768" # type xrandr to check supported mode
mouse_reset=unclutter

[fonts]
font=Fira Code Nerd Font Bold 10
font_regular=Fira Code Nerd Font Medium 9
font_light=Fira Code Nerd Font Light 10
font_strong=Fira Code Nerd Font 12
font_strong_bold=Inter Bold 12
font_icon=Fira Code Nerd Font 11

[colors]
fg_normal=#e0fbfc
fg_focus=#C4C7C5
fg_urgent=#CC9393
bg_normal=#263238
bg_focus=#1E2320
bg_urgent=#424242
bg_systray=#e0fbfc
bg_selected=#5c6b73
fg_blue=#304FFE
fg_ligth_blue=#B3E5FC
fg_yellow=#FFFF00
fg_red=#D50000
fg_orange=#FFC107
fg_purple=#AA00FF
fg_purple2=#6200EA
fg_green=#4BC1CC
bg_topbar=#253237
bg_topbar_arrow=#5c6b73
border_color_normal=#9db4c0
border_color_active=#c2dfe3
border_color_marked=#CC9393
titlebar_bg_focus=#263238
titlebar_bg_normal=#253238

As you can see, there are numerous settings, however I must admit that there are numerous items to include in this file.

Features

Changing the wallpaper when using the slide show mode

Exploring Awesome WM, my preferred window manager

Screenshot desktop, window, delayed and area

The screenshot default place is $HOME/Pictures/Screenshots
Exploring Awesome WM, my preferred window manager

Notifications feedback's

Exploring Awesome WM, my preferred window manager

Launcher (rofi)

Exploring Awesome WM, my preferred window manager

Lock screen (i3lock)

Exploring Awesome WM, my preferred window manager

Tooltip

Exploring Awesome WM, my preferred window manager

Terminal

Exploring Awesome WM, my preferred window manager

🎮 Not only a coder, but also a daddy developer: I was playing Roblox with my son while Wine ("vinegar") was consuming and punishing my CPU.

Exploring Awesome WM, my preferred window manager

Features development

Some features were created from scratch, while others were discovered on GitHub and adapted to my way and style.

If you feel the same way I do, that KDE, GNOME, Mate, XFCE, and Cinnamon are too much for you, go with Awesome WM. Here are my dotfiles:

GitHub - williampsena/dotfiles: This repository includes my dotfiles for Awesome Window Manager.
This repository includes my dotfiles for Awesome Window Manager. - GitHub - williampsena/dotfiles: This repository includes my dotfiles for Awesome Window Manager.
Exploring Awesome WM, my preferred window manager

That's all folks

In this piece, I explained how Awesome WM works and shared some useful dotfiles. As a developer, I hope you find this material valuable on a daily basis.

I'll see you again soon, and please keep your kernel 🧠 up to date and God bless 🙏🏿 you.

]]>
<![CDATA[Using the graceful shutdown approach to dispose of applications]]>https://willsena.dev/using-graceful-shutdown-approach-to-dispose-of-applications/652f1dcb727569000aeb0d59Wed, 18 Oct 2023 00:09:40 GMT

Graceful shutdown is a process that is well stated in the twelve factors; in addition to keeping applications with a 🏁 fast and furious launch, we need to be concerned with how we dispose of every application component. We're not talking about classes and the garbage collector. This topic is about interruption, which could be caused by a user stopping a program or a container receiving a signal to stop for a scaling operation, swapping to another node, or other things that happen on a regular basis while working with containers.

Imagine an application receiving requests for transaction payments and an interruption occurs; this transaction becomes lost or incomplete, and if retry processing or reconciliation is not implemented, someone will need to push a button to recover this transaction...

Using the graceful shutdown approach to dispose of applications
We should agree that manual processing works at first, but every developer knows the end...

How does graceful shutdown work?

When your application begins to shut down, it should stop accepting new demands. These demands could be messages from a queue or topic; if we're dealing with workers, unprocessed messages should return to the queue or topic. RabbitMQ provides a message acknowledgment (ACK) that removes a message from the queue only once the worker has successfully processed it. In container contexts, this step should be quick to avoid a forced interruption caused by a long waiting time.

Show me the code!

You may get the source code from my GitHub repository.

The following code shows a basic application that uses signals and displays Dragon Ball 🐲 character information every five seconds. When an interruption signal is received, the ticker responsible for printing messages is stopped. In this example we're using simple timers, but it could also be a web server or a worker connected to a queue, as previously said. Many frameworks and components include behaviors for closing and waiting for in-flight demands.

  • app.go
package main

import (
	"encoding/csv"
	"fmt"
	"math/rand"
	"os"
	"os/signal"
	"syscall"
	"time"
)

const blackColor string = "\033[1;30m%s\033[0m"

var colors = []string{
	"\033[1;31m%s\033[0m",
	"\033[1;32m%s\033[0m",
	"\033[1;33m%s\033[0m",
	"\033[1;34m%s\033[0m",
	"\033[1;35m%s\033[0m",
	"\033[1;36m%s\033[0m",
}

type Character struct {
	Name        string
	Description string
}

func main() {
	printHello()

	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)

	fmt.Println("Starting random Dragon Ball characters service...")

	shutdown := make(chan bool, 1)

	go func() {
		sig := <-sigs
		fmt.Println()
		fmt.Println(sig)
		shutdown <- true
	}()

	characterSize, characterList := readFile()

	quit := make(chan struct{})

	go func() {
		ticker := time.NewTicker(5 * time.Second)
		for {

			select {
			case <-ticker.C:
				printMessage(characterSize, characterList)
			case <-quit:
				ticker.Stop()
				return
			}
		}
	}()

	<-shutdown

	close(quit)

	fmt.Println("Process gracefully stopped.")
}

func printHello() {
	dat, err := os.ReadFile("ascii_art.txt")

	if err != nil {
		panic(err)
	}

	fmt.Println(string(dat))
}

func readFile() (int, []Character) {
	file, err := os.Open("dragon_ball.csv")

	if err != nil {
		panic(err)
	}

	csvReader := csv.NewReader(file)
	data, err := csvReader.ReadAll()

	if err != nil {
		panic(err)
	}

	characterList := buildCharacterList(data)

	file.Close()

	return len(characterList), characterList
}

func buildCharacterList(data [][]string) []Character {
	var characterList []Character

	for row, line := range data {
		if row == 0 {
			continue
		}

		var character Character

		for col, field := range line {
			if col == 0 {
				character.Name = field
			} else if col == 1 {
				character.Description = field
			}
		}

		characterList = append(characterList, character)
	}

	return characterList
}

func printMessage(characterSize int, characterList []Character) {
	color := colors[rand.Intn(len(colors))]
	characterIndex := rand.Intn(characterSize)
	character := characterList[characterIndex]

	fmt.Printf(color, fmt.Sprintf("%s %s", "🐉", character.Name))
	fmt.Printf(blackColor, fmt.Sprintf(" %s\n", character.Description))
}
  • go.mod
module app

go 1.20

Code Highlights

  • This code block prepares the application to support signals; shutdown is a channel that, when modified, triggers an execution block for disposal.
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)

	shutdown := make(chan bool, 1)

	go func() {
		sig := <-sigs
		fmt.Println()
		fmt.Println(sig)
		shutdown <- true
	}()
  • The ticker is in charge of printing messages every 5 seconds; when it receives a signal from the quit channel, it stops.
quit := make(chan struct{})

go func() {
    ticker := time.NewTicker(5 * time.Second)
    for {

        select {
        case <-ticker.C:
            printMessage(characterSize, characterList)
        case <-quit:
            ticker.Stop()
            return
        }
    }
}()
  • The ticker is closed by "quit channel" after receiving a signal halting the application's execution.
<-shutdown

	close(quit)

	fmt.Println("Process gracefully stopped.")
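As mentioned earlier, the same pattern applies to web servers. Below is a minimal sketch (not part of the repository) using the standard library's http.Server.Shutdown, which stops accepting new connections and waits for in-flight requests to finish:

package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":3000"}

	// Serve requests until Shutdown is called.
	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatal(err)
		}
	}()

	// Wait for SIGINT/SIGTERM, as in app.go.
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)
	<-sigs

	// Stop accepting new connections and wait up to 10s for in-flight requests.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("forced shutdown: %v", err)
	}

	log.Println("Process gracefully stopped.")
}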

Graceful Shutdown working

When CTRL+C is pressed, the application receives the SIGINT signal and disposal occurs. The following command launches the application:

go run app.go

Using the graceful shutdown approach to dispose of applications

Containers

It's time to look at graceful shutdown in the container context; in the following file, we have a container image:

  • Containerfile
FROM docker.io/golang:alpine3.17

MAINTAINER [email protected]

WORKDIR /app

COPY ./graceful_shutdown go.mod /app

RUN go build -o /app/graceful-shutdown

EXPOSE 3000

CMD [ "/app/graceful-shutdown" ]

Let's build a container image:

docker buildx build -t graceful-shutdown -f graceful_shutdown/Containerfile .

# without buildx
docker build -t graceful-shutdown -f graceful_shutdown/Containerfile .

# for podmans
podman build -t graceful-shutdown -f graceful_shutdown/Containerfile .

The following commands test the execution, the logs, and the stop command that is in charge of sending signals to the application; if the application does not exit after receiving the signal, Docker waits a few seconds and forces an interruption:

docker run --name graceful-shutdown -d -it --rm graceful-shutdown
docker logs -f graceful-shutdown

# sent signal to application stop
docker stop graceful-shutdown 

# Using Podman

podman run --name graceful-shutdown -d -it --rm graceful-shutdown
podman logs -f graceful-shutdown

# sent signal to application stop
podman stop graceful-shutdown 

That's all folks

In this article, I described how graceful shutdown works and how you may apply it in your applications. Implementing graceful shutdown is part of a robust process; we should also examine how to reconcile processing when a server, node, or network fails, so we should stop thinking only about the happy path.

I hope this information is useful to you on a daily basis as a developer.

I'll see you next time, and please keep your kernel 🧠 updated.

References

]]>
<![CDATA[Turbinando a instalação dos pacotes NPM com o PNPM]]>https://willsena.dev/turbinando-a-instalacao-dos-pacotes-npm-com-pnpm/64f05ad6fe2c1e000938ef8bThu, 31 Aug 2023 13:52:15 GMT

Years ago, I was torn between NPM and Yarn, a common dilemma among Node.js developers. I always kept both installed, and after a performance improvement in NPM, Yarn became plan B: "if NPM fails, let's go with Yarn".

In search of knowledge 👽, I found PNPM some time ago and decided to test it. I noticed a huge improvement in package installation, and the most interesting part is that its improved package-reuse strategy also reduces the disk space consumed by packages.

PNPM

It is a package manager that handles installing, updating, and removing packages. These packages mostly contain JavaScript/TypeScript and assets used by the various bundlers. Just like Yarn, PNPM uses the NPM registry.

Improvements

Reduced installation time and package size are PNPM's strong points; these characteristics help a lot, from building container images to the development environment.

Nowadays we install an NPM package that installs a dependency that has another dependency, and you end up putting so much into your project that you only notice it in your lock file.

Turbinando a instalação dos pacotes NPM com o PNPM

Why PNPM?

I'm not saying we should stop using Yarn or NPM; both work. But when you have those projects with several packages, using a bundler with many deeply rooted, shared dependencies...

We can say that with PNPM your problems are over, I guarantee it...
Turbinando a instalação dos pacotes NPM com o PNPM

Who uses PNPM?

The project is quite mature and is used by large companies.

Turbinando a instalação dos pacotes NPM com o PNPM

Installing PNPM

Let's install PNPM. There are several methods available: since NPM is already present in every Node.js environment, we can install it through NPM, just like Yarn, or go with a standalone installation.

# powershell (windows)
iwr https://get.pnpm.io/install.ps1 -useb | iex

# using curl
curl -fsSL https://get.pnpm.io/install.sh | sh -

# using wget
wget -qO- https://get.pnpm.io/install.sh | sh -

# using npm
npm install -g pnpm

Tests

Below we'll test installing the React library and its dependencies:

mkdir /tmp/pnpm-test
cd /tmp/pnpm-test
npm init -y foo bar

pnpm install react @testing-library/react @testing-library/jest-dom

The command below lists symbolic links, which is one way to verify package reuse within the project.

ls -lhaF node_modules | grep ^l

The libraries used share React as a common package, which will be reused. This is one of PNPM's strategies to make installation faster and reduce storage usage; as mentioned earlier, packages are stored only once. Below is a symbolic link reference to node_modules/react.

.pnpm/[email protected]/node_modules/react/

Not that I like projects that share a huge list of dependencies, but the benefits are even greater with PNPM.

Performance

The official PNPM website provides benchmarks comparing package managers, features, and execution times.

Turbinando a instalação dos pacotes NPM com o PNPM

Compatibility

PNPM has been compatible with Node.js since version 16.x; today version 18.x is the LTS. Below is a compatibility table, taking the LTS version as the starting point.

Node.js pnpm 5 pnpm 6 pnpm 7 pnpm 8
18 ?
20 ? ?

Features

This feature list and comparison were taken from the official website.

Feature pnpm Yarn npm
Workspace support ✔️ ✔️ ✔️
Isolated node_modules ✔️ - The default ✔️ ✔️
Hoisted node_modules ✔️ ✔️ ✔️ - The default
Autoinstalling peers ✔️ ✔️
Plug'n'Play ✔️ ✔️ - The default
Zero-Installs ✔️
Patching dependencies ✔️ ✔️
Managing Node.js versions ✔️
Has a lockfile ✔️ - pnpm-lock.yaml ✔️ - yarn.lock ✔️ - package-lock.json
Overrides support ✔️ ✔️ - Via resolutions ✔️
Content-addressable storage ✔️
Dynamic package execution ✔️ - Via pnpm dlx ✔️ - Via yarn dlx ✔️ - Via npx
Side-effects cache ✔️
Listing licenses ✔️ - Via pnpm licenses list ✔️ - Via a plugin

Main commands

pnpm npm yarn
pnpm install npm install yarn install
pnpm install react npm install react yarn add react
pnpm uninstall react npm uninstall react yarn remove react
pnpm store prune npm cache clean --force yarn cache clean

For more information, see the official documentation.

The end!

The goal of this article was to demonstrate a fast option for managing your packages, whether they are installed in your development environment or in your container. Another advantage is that the PNPM lock file uses YAML syntax, which makes it easier to read, in my humble opinion.

God bless 🕊️ you and have an excellent week!

Keep your kernel 🧠 updated.

References

]]>
<![CDATA[Stop making assumptions! Extract metrics from Kong API Gateway using Grafana and Prometheus]]>https://willsena.dev/stop-making-assumptions-extract-metrics-from-kong-api-gateway-using-grafana-and-prometheus/64d0d5975640d70006bd8a6aMon, 07 Aug 2023 12:00:45 GMT

A few days ago I was chatting with my brother about Kong, clusters, deployment, and how to extract metrics. So I'm here to show you how to use Grafana and Prometheus to collect metrics from Kong API Gateway. Metrics are a good way to gather data, detect bottlenecks, and make improvements to your stack.

Kong

If you are unfamiliar with Kong API Gateway, I recommend reading my prior article:
Having a date with Kong, the most popular API gateway
Kong is built on NGINX and uses the Lua module to support your plugins; you could now use Python, Go, and Javascript in your custom plugins.
Stop making assumptions! Extract metrics from Kong API Gateway using Grafana and Prometheus

Kong is an API Gateway built on NGINX that allows plugins through the Lua module; however, custom plugins can now also be written in Python, Go, and JavaScript.

Kong offers two editions: Community and Enterprise. Private plugins, a dashboard, an admin UI, and hosting plans are available with the Enterprise edition. We must fill this gap in the Community edition with open-source solutions: if you need an admin UI, Konga is a possibility, and you can find community Kong plugins if the core plugins do not support your solution.

Grafana

Observability has never been more essential than right now, when we work with clusters, instances with varying specs, containers, and multiple programming languages. Analyzing metrics and displaying them in dashboards is the best way to determine what works for our stack.

Grafana is an open source analytics and monitoring tool that lets us send measurements from a wide range of data sources, including Prometheus, InfluxDB, PostgreSQL, MySQL, MSSQL, Azure Monitor, Google Cloud Monitoring, AWS CloudWatch, ElasticSearch, and others.

The most essential aspect of all metrics configuration is that we may establish alerts for Application or Business metrics:

Business

  • How many payments did we receive per minute?
  • How many orders are canceled per day?
  • How many claims are made every hour?


Application

  • How many RPS (Request per Second) do we have?
  • How much CPU and RAM does the application require?
  • How many messages are waiting to be processed in our queue?
  • What about the http status of services?

With these statistics, we may say, "Good game, my bro!"

Stop making assumptions! Extract metrics from Kong API Gateway using Grafana and Prometheus

Prometheus

Prometheus is open source software that was created by SoundCloud in 2012. After Kubernetes, the Cloud Native Computing Foundation selected Prometheus as its second incubated project in May 2016.

The software supports event monitoring and alerting; however, it lacks Grafana's excellent dashboards and full-featured UI.

The example

This time, we'll build an API Gateway that exposes both external APIs, for retrieving currency and coin quotes (money and coins), and local APIs, to demonstrate how Kong load balancing works with upstreams and self-healing (health checks).

Stop making assumptions! Extract metrics from Kong API Gateway using Grafana and Prometheus
This image shows how Kong handles requests
  • api routes, by default, Kong listens on port 8000 for established API routes.
  • admin routes, Kong allows us to handle routes, services, and plugins on port 8001, but some features will only operate in database mode, which accepts PostgreSQL and Cassandra.
  • status routes, used to expose metrics from enabled plugins such as Prometheus (see the scrape config sketch right after this list).
  • invalid-whoami-2, this is an invalid upstream that illustrates how an upstream behaves in an unhealthy scenario and what metrics become available.
  • load balancing, is only used by whoami; other routes use upstreams with only one host and no fault tolerance.
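The repository's prometheus/prometheus.yml may differ, but as a minimal sketch (the job name and interval are illustrative), Prometheus only needs to scrape Kong's status listener, which serves the plugin's metrics at /metrics on port 8100:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: kong
    metrics_path: /metrics
    static_configs:
      - targets: ["kong:8100"]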

How does the integration of Grafana, Prometheus, and Kong work?

The drawings below show how metrics are turned into dashboards and alerts in Grafana, using data that was originally generated in Kong and scraped by Prometheus.

Stop making assumptions! Extract metrics from Kong API Gateway using Grafana and Prometheus
Grafana, Prometheus and Kong integration

Let's look at the assets files

Please clone this repository; this example contains a lot of files, and I'll go over the important ones.
  • kong/kong.yml

This file covers the Kong features; only one plugin is needed to expose metrics to Prometheus, and all of its metric options should be enabled to support every section of the official dashboard.

_format_version: "3.0"

plugins:
- name: prometheus
  config:
    status_code_metrics: true
    latency_metrics: true
    bandwidth_metrics: true
    upstream_health_metrics: true

services:
- name: hello
  url: http://local-server
  routes:
  - name: hello
    paths:
    - /

- name: whoami
  url: http://whoami
  routes:
  - name: whoami
    paths:
    - /whoami

- name: coins
  url: https://coins-api/v1/bpi/currentprice.json
  routes:
  - name: coins
    paths:
    - /coins

- name: money
  url: https://money-api/json/all
  routes:
  - name: money
    paths:
    - /money

upstreams:
  - name: coins-api
    targets:
    - target: api.coindesk.com:443

  - name: money-api
    targets:
    - target: economia.awesomeapi.com.br:443

  - name: local-server
    targets:
    - target: nginx:5000

  - name: whoami
    targets:
    - target: whoami:80
      weight: 50
    - target: whoami-2:80
      weight: 50
    - target: invalid-whoami-2:80
      weight: 1
    healthchecks:
      passive:
        healthy:
          http_statuses:
          - 200
          successes: 1
        type: http
        unhealthy:
          http_failures: 5
          http_statuses:
          - 429
          - 500
          - 503
          tcp_failures: 2
          timeouts: 2
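
With the passive health checks above in place, you can ask Kong how it currently sees the whoami targets through the Admin API; a minimal sketch, assuming the default admin port mapping:

curl -s http://localhost:8001/upstreams/whoami/health
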
  • docker-compose.yml

A number of containers will be running concurrently:

  • Kong
  • Prometheus
  • Grafana
  • Nginx (local server)
  • Traefik Whoami (used for upstream metric generation)
  • Seed (used to produce metrics by making queries)

version: '3.8'

x-healthcheck: &default-healthcheck
  interval: 10s
  timeout: 3s
  start_period: 1s

services:
  kong:
    image: kong:3.3.1-alpine
    environment:
      KONG_LOG_LEVEL: info
      KONG_ADMIN_ACCESS_LOG: /dev/stdout
      KONG_ADMIN_ERROR_LOG: /dev/stderr
      KONG_DATABASE: "off"
      KONG_DECLARATIVE_CONFIG: /etc/kong/kong.yml
      KONG_STATUS_LISTEN: "0.0.0.0:8100"
      KONG_ADMIN_LISTEN: "0.0.0.0:8001"
    healthcheck:
        <<: *default-healthcheck
        test: ["CMD-SHELL", "nc -z -v localhost 8000"]
    ports:
      - "8000:8000"
      - "8001:8001"
      - "8100:8100"
    restart: unless-stopped
    networks:
      - kong-grafana
    volumes:
      - ./kong/kong.yml:/etc/kong/kong.yml
    depends_on:
      - nginx
      - whoami
      - whoami-2

  prometheus:
    image: prom/prometheus
    ports:
      - 9090:9090
    healthcheck:
      <<: *default-healthcheck
      test: ["CMD-SHELL", "nc -z -v localhost 9090"]
    volumes: 
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
    networks:
      - kong-grafana
    depends_on:
      - kong

  grafana:
    image: grafana/grafana
    ports: 
      - 9091:9091
    healthcheck:
        <<: *default-healthcheck
        test: ["CMD-SHELL", "nc -z -v localhost 9091"]
    volumes: 
      - ./grafana/grafana.ini:/etc/grafana/grafana.ini
      - ./grafana/datasource.yaml:/etc/grafana/provisioning/datasources/datasource.yaml
      - grafana-storage:/var/lib/grafana
    networks:
      - kong-grafana
    depends_on:
      - prometheus

  nginx:
    build:
      context: ./
      dockerfile: nginx.Dockerfile
    image: nginx-local-server
    healthcheck:
      <<: *default-healthcheck
      test: ["CMD-SHELL", "nginx -t"]
    networks:
      - kong-grafana

  whoami:
    image: traefik/whoami
    environment:
      - WHOAMI_PORT_NUMBER=80
    networks:
      - kong-grafana

  whoami-2:
    image: traefik/whoami
    environment:
      - WHOAMI_PORT_NUMBER=80
    networks:
      - kong-grafana

  seed:
    build:
      context: ./
      dockerfile: seed.Dockerfile
    networks:
      - kong-grafana
    depends_on:
      - kong

volumes:
  grafana-storage:

networks:
  kong-grafana:
    name: "kong-grafana"
  • nginx.Dockerfile

A Docker NGINX image that listens on port 5000 and contains some assets.

FROM docker.io/bitnami/nginx:1.25

ENV NGINX_HTTP_PORT_NUMBER=5000

COPY ./assets/nginx /app
  • seed.Dockerfile

An Alpine-based Docker image used to call the API Gateway and generate metrics with Apache Bench (ab).

FROM alpine:latest

RUN apk add apache2-utils

COPY ./seed/test_apis.sh /test_apis.sh

RUN chmod +x /test_apis.sh

ENTRYPOINT [ "/test_apis.sh" ]
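
The script itself is not shown here; a hypothetical sketch of what seed/test_apis.sh could look like, hitting each route with Apache Bench so Kong produces metrics (the real script lives in the repository):

#!/bin/sh
# Hypothetical sketch: fire a few requests at each route so Kong generates metrics
REQUESTS="${REQUESTS:-10}"

for path in / /whoami /coins /money; do
  ab -n "$REQUESTS" -c 2 "http://kong:8000${path}"
done
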
  • grafana/datasource.yaml

This file describes Grafana's accessible datasources.

apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    url: http://prometheus:9090
  • prometheus/prometheus.yml

This file provides Prometheus targets for metric extraction and specifies a global interval for scraping data.

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'kong'

    static_configs:
      - targets: ['kong:8100']

You can start the containers after cloning the repository; however, make sure the ports specified in the docker-compose file are not already in use.

docker compose up -d

Here are some URLs you can explore:

  • http://localhost:8000 – Kong proxy (API routes)
  • http://localhost:8001 – Kong Admin API
  • http://localhost:8100/metrics – Kong status / Prometheus metrics
  • http://localhost:9090 – Prometheus
  • http://localhost:9091 – Grafana

The seed container will execute certain requests to generate data for Grafana; you can do more manually or relaunch the seed container with the following command:

docker compose run --rm --env REQUESTS=11 seed
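
After Prometheus has scraped Kong a couple of times, you can confirm the scrape target is healthy through its HTTP API (assuming the default port mapping from the compose file):

curl -s http://localhost:9090/api/v1/targets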

Prometheus metrics format

This is just part of the Prometheus metrics format: each line is a metric name with a set of labels (key/value pairs) followed by a value; you can see more at http://localhost:8100/metrics.

# HELP kong_bandwidth_bytes Total bandwidth (ingress/egress) throughput in bytes
# TYPE kong_bandwidth_bytes counter
kong_bandwidth_bytes{service="coins",route="coins",direction="egress",consumer=""} 9410
kong_bandwidth_bytes{service="coins",route="coins",direction="ingress",consumer=""} 752
kong_bandwidth_bytes{service="money",route="money",direction="egress",consumer=""} 20514
kong_bandwidth_bytes{service="money",route="money",direction="ingress",consumer=""} 1128
# HELP kong_datastore_reachable Datastore reachable from Kong, 0 is unreachable
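
Once these series are in Prometheus, they can be queried through its HTTP API. A rough sketch, querying egress bandwidth per service (the 5-minute window and the grouping are just examples):

curl -sG 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=sum(rate(kong_bandwidth_bytes{direction="egress"}[5m])) by (service)'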

Kong Dashboard

The Grafana dashboard can be accessed at http://localhost:9091/ using those famously strong default credentials (admin/admin). After signing in, browse to Dashboards > Manage > Import.

Then import Kong's official dashboard file, choose the Prometheus data source, and you can now view the dashboard and metrics.


Alerts

To set up alerts, identify a specific metric and define values for ok, warning, and error, as we do in other tools such as New Relic, DataDog, CloudWatch, and so on. This configuration can be found at http://localhost:9091/alerting.

As an alert example, the request count from the kong_http_requests_total metric can be used to determine whether Kong is still receiving requests.
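
A minimal sketch of the query such an alert could be built on (the 5-minute window is just an example; the ok, warning, and alerting thresholds are configured in Grafana itself):

# Per-second request rate across all services; alert when it drops to zero
sum(rate(kong_http_requests_total[5m]))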

It is common to receive alerts by email; to make this work, you must configure the SMTP settings in the grafana.ini file (set enabled = true and fill in host, user, and password).

[smtp]
enabled = false
host = localhost:25
user =
# If the password contains # or ; you have to wrap it with triple quotes. Ex """#password;"""
password =
cert_file =
key_file =
skip_verify = false
from_address = [email protected]
from_name = Grafana
ehlo_identity =
startTLS_policy =

[emails]
welcome_email_on_sign_up = false
templates_pattern = emails/*.html

That's all

I hope you found this article useful; I tried to provide a sandbox for you to start experimenting with metrics instead of making assumptions about performance, issues, and migrations.

This post does not cover how to create custom metrics with the full power of Prometheus; instead, we used the metrics exposed by Kong's Prometheus plugin.

Several applications and frameworks generate metrics in Prometheus format, but if you want to produce business metrics, you'll have to get your hands dirty.
You may create a wonderful dashboard in Grafana, but don't spend your time on it before looking at the official one.

Keep your dashboards and your kernel 🧠 up to date. God's 🙏🏿 blessings on you.

References

]]>
<![CDATA[Movendo BOOT EFI para sua nova partição]]>https://willsena.dev/movendo-boot-efi-para-sua-nova-particao/64b5d912fbf9a60006c5cdd6Tue, 18 Jul 2023 12:53:28 GMT

Seriously, you're going to talk about Windows? Yes, why not!

Although I'm an active Linux user, I'd say that a large part of my professional experience was on Windows, affectionately known as "janelinha" (little window) among close friends...


My first contact with Windows was version 3.11. My father started his career as a computer technician, and I ended up going through every version: 95, 98, ME, NT, 2000, XP, and Vista. I've been using Windows on my desktop ever since version 7, and I'm now on Windows 11, but the penguin 🐧 is the firstborn on my notebook.

The problem

Some time ago, I bought a Kingston NVMe M.2 drive and decided to install the new Windows 11. However, during the installation I created one partition for general use and another for recovery, but the famous 100 MB EFI partition was missing. The boot files stayed on the old SSD. 😬

This year I upgraded my PC with a few gamer touches:

  • Ryzen 7 5800X, this thing is 🔥;
  • Asus TUF GAMING motherboard;
  • NVIDIA RTX 3050;
  • 32 GB of RAM;
  • A case full of LEDs, just the way my boy likes it;

Despite the upgrade, Windows seems to be choking due to poor optimization. Stop copying KDE and improve the OS. 😤

So I decided to solve this boot problem. Can you imagine if my dear old SSD decided to die? That would mean an EFI restoration job I'd rather avoid. No time for that, bro...

Speaking of storage, my critically acclaimed WD Elements (WD Blue) external hard drive died after barely being used. The fallacy that "Western Digital is better than Seagate" doesn't seem to apply to me. My Seagate is old and still going strong in my PC, which has been through several upgrades. They are used in different ways, one internal and the other external, but my indignation is on the record.

No more beating around the bush! I used MiniTool Partition Wizard, a (free) Windows tool that can resize disks and my favorite of the bunch. It seems to be the successor of the well-known Partition Magic. Windows Disk Management itself can do this resizing, and you can also use DiskGenius for the same purpose.

These tools let you use some of their features for free, but there are also paid, "freemium" features.

Don't want to install anything on Windows and you dual-boot Linux? GParted also does this job well.

You may need to resize one or more partitions on your disk to leave 100 MB of unallocated space for the new partition.

Creating the partition

Once you have that unallocated space, you can create a FAT32 partition and name it EFI. You must assign the partition a drive letter, which will be needed to write the boot files.
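
If you prefer the command line over the graphical tools mentioned above, diskpart can do the same job; a minimal sketch, assuming the disk uses GPT, disk 1 is the new NVMe, and Z: is the letter you want to assign:

diskpart
list disk
rem assumption: disk 1 is the new drive that is missing the EFI partition
select disk 1
create partition efi size=100
format fs=fat32 quick label=EFI
rem Z: is only an example letter, matching the bcdboot command below
assign letter=Z
exit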

Microsoft made our lives easier with bcdboot. If the partitions have already been created, you can easily build the entire EFI structure for your current system with the following command:

  • C:\Windows is where your operating system is installed;
  • Z: is an example; use the letter you assigned to your new EFI partition.

bcdboot C:\Windows /s Z:


Done

After creating the required boot files, you can set the new partition as the primary boot partition in your BIOS. You could delete the old EFI partition afterwards, but I find it interesting to keep both, just in case!

God bless 🕊️ and have an excellent week! Let's keep that kernel 🧠 up to date.

References

]]>
<![CDATA[How to use Vanilla JavaScript to listen for and emit events]]>https://willsena.dev/how-to-use-vanilla-javascript-to-listen-for-and-emit-events/64a4aeca5dd8710006f4a687Wed, 05 Jul 2023 00:55:55 GMT

Today I'll show you how to listen for and emit events using only JavaScript and the DOM. If you are a web developer, you may be aware that everything begins with events, such as mouse moves, clicks, and key presses, but how about creating your own events to do your own work?


Fire a click event

We can emit a click event at any place in the front-end context; let's create a basic page to demonstrate this:

<!DOCTYPE html>
<html lang="en">
  <head>
    <title>Fire a click event</title>
  </head>
  <body>
    <button id="button">Click to subscribe, please! 🙏 (<span>0</span>)</button>
    <script lang="javascript" async>
      const buttonElement = document.getElementById("button")
      let counter = 0

      buttonElement.addEventListener('click', function (event) {
          event.preventDefault()
          counter++
          const counterElement = this.getElementsByTagName("span")[0]
          counterElement.innerText = counter.toString()
      })

      setInterval(() => {
        buttonElement.dispatchEvent(new Event('click'))
      }, 5000)
    </script>
  </body>
</html>

Take a deep breath, Angular, React, and Vue devs, and remember that this is pure and portable Vanilla. Keep cool and relax, I know you like writing your effects, directives, and states. 🤗

In the example above, we have a basic page that forces a person to click against their will, similar to how some ads work 🤑.

  • addEventListener, registers a function to listen for an element's events;
  • dispatchEvent, sends an event to an element; if nothing is listening, the event is simply ignored;

Here is how this example behaves: every five seconds a click event is dispatched programmatically, and the subscriber counter increases as if the button had been clicked.


Why not create your own events?

We can add our own events to a web application, such as a logout event, a redirect event, a push event, or anything else important to your application or website. If you need to glue different libraries together, you can use the power of DOM events to do so.

<!DOCTYPE html>
<html lang="en">
  <head>
    <title>Document events</title>
  </head>
  <body>
    <button id="button">Purge Session 🧹</button>
    <script lang="javascript" async>
      document.addEventListener('logout', function () {
        sessionStorage.clear()
        localStorage.clear()

        console.log('Your session has been deleted...')

        document.bgColor = 'tomato'

        setTimeout(() => {
          document.bgColor = '#fff'
        }, 2000)
      })

      const buttonElement = document.getElementById("button")

      buttonElement.addEventListener('click', function (event) {
        event.preventDefault()
        document.dispatchEvent(new Event('logout'))
      })
    </script>
  </body>
</html>

When the button is clicked, a logout event is dispatched to the document: the session and local storage are cleared, a message is logged to the console, and the background briefly turns tomato red.
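
If you also need to pass data along with your event, the DOM provides CustomEvent; a minimal sketch, where the event name and the payload are just examples:

document.addEventListener('logout', function (event) {
  // event.detail carries whatever payload was attached when the event was dispatched
  console.log(`Logout reason: ${event.detail.reason}`)
})

document.dispatchEvent(new CustomEvent('logout', { detail: { reason: 'token expired' } }))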


Removing an event

Using removeEventListener, we can remove a registered listener:

<!DOCTYPE html>
<html lang="en">
  <head>
    <title>RemoveEventListener in action</title>
  </head>
  <body>
    <button id="button">Click one time</button>
    <script lang="javascript" async>
      const buttonElement = document.getElementById('button')
      let counter = 0

      function handleClickAlert(event) {
        event.preventDefault()
        alert('click works!')
        buttonElement.removeEventListener('click', handleClickAlert, false)
      }

      function handleClickSilent(event) {
        event.preventDefault()

        counter++
        console.log(`click works ${counter}!`)
      }

      buttonElement.addEventListener('click', handleClickSilent)
      buttonElement.addEventListener('click', handleClickAlert)
    </script>
  </body>
</html>

Because we can add multiple listeners, we must specify which function should be removed. The first click shows an alert box; subsequent clicks only log messages to the console, since the alert handler removed itself and is no longer listening.
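
As an alternative to calling removeEventListener yourself, addEventListener accepts an options object with { once: true }, which removes the listener automatically after its first call:

// the alert handler runs a single time and is then removed automatically
buttonElement.addEventListener('click', handleClickAlert, { once: true })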

That's all

The previous examples are designed to work with modern browsers; if you use Internet Explorer 6, something may not work properly. 🤭

Sometimes DOM events are all we need to integrate things; many frameworks handle the work of listening for and emitting events, but it's crucial to understand how things work without libraries and packages. That is my only goal in writing this post.

Keep your kernel 🧠 up to date. God's 🙏🏿 blessings on you.

]]>