Metadata-Version: 2.4
Name: opsiconfd
Version: 4.3.42.1
Summary: opsi configuration service
Author-email: uib GmbH <info@uib.de>
Maintainer-email: uib GmbH <info@uib.de>
License-Expression: AGPL-3.0-only
Project-URL: Homepage, https://www.opsi.org
Requires-Python: >=3.11
Description-Content-Type: text/markdown
Requires-Dist: acme>=3.0
Requires-Dist: aiofiles>=24.1
Requires-Dist: aiohttp>=3.7
Requires-Dist: aiologger>=0.7
Requires-Dist: aiozeroconf>=0.1
Requires-Dist: annotated-types>=0.7
Requires-Dist: click>=8.1
Requires-Dist: configargparse>=1.4
Requires-Dist: configupdater>=3.1
Requires-Dist: distro>=1.5
Requires-Dist: fastapi>=0.115
Requires-Dist: greenlet>=3.0
Requires-Dist: hiredis>=3.1
Requires-Dist: httpx>=0.28
Requires-Dist: itsdangerous>=2.0
Requires-Dist: mysqlclient<2.2,>=2.0
Requires-Dist: netifaces>=0.11
Requires-Dist: objgraph>=3.5
Requires-Dist: py3dns>=4.0
Requires-Dist: pydantic>=2.4
Requires-Dist: pydantic-core>=2.18
Requires-Dist: pympler>=1.0
Requires-Dist: pymysql>=1.1
Requires-Dist: pyotp>=2.8
Requires-Dist: python-magic>=0.4
Requires-Dist: python-multipart>=0.0
Requires-Dist: python-opsi<4.4,>=4.3.6
Requires-Dist: python-opsi-common<4.4,>=4.3.29
Requires-Dist: python-opsi-system<4.4,>=4.3.2.1
Requires-Dist: python3-saml==1.16.0
Requires-Dist: qrcode>=8.0
Requires-Dist: redis<5.3,>=5.0
Requires-Dist: rich>=13.0
Requires-Dist: six>=1.16
Requires-Dist: starlette>=0.46
Requires-Dist: uvicorn>=0.34
Requires-Dist: uvloop>=0.21
Requires-Dist: websockets>=15.0
Requires-Dist: werkzeug>=3.0
Requires-Dist: wsgidav>=4.3
Requires-Dist: wsproto>=1.2
Requires-Dist: xmlsec<=1.3.13
Requires-Dist: lxml
Requires-Dist: yappi>=1.4; platform_machine == "x86_64"

![pipeline](https://gitlab.uib.gmbh/uib/opsiconfd/badges/devel/pipeline.svg)
![coverage](https://gitlab.uib.gmbh/uib/opsiconfd/badges/devel/coverage.svg)
# Configuration

The configuration is based on [ConfigArgParse](https://pypi.org/project/ConfigArgParse/).
Configuration values can be set via command line arguments, environment variables, a config file, or defaults.
If a value is specified in more than one way, the following order of precedence applies:
command line argument > environment variable > config file value > default value
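The precedence rule can be sketched as a simple resolver. The function and values below are made up purely for illustration; opsiconfd itself resolves this internally via ConfigArgParse:

```shell
# Illustrative sketch of the precedence rule: the first source that
# provides a non-empty value wins (cli > env > config file > default).
resolve() {
	local value
	for value in "$@"; do
		if [ -n "$value" ]; then
			echo "$value"
			return
		fi
	done
}

# No command line value given, so the environment variable value (5) wins:
resolve "" "5" "4" "3"  # prints 5
```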

## Internal and external URLs
For communication between services (Redis, Grafana, opsiconfd, ...), the internal URLs are used.
These can differ from the external URLs of the services, for example when services are connected via a Docker-internal network or sit behind a proxy or load balancer.
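For example, opsiconfd may reach Grafana through a Docker-internal hostname while browsers use the public address. A sketch of such a configuration, assuming options named `grafana-internal-url` and `grafana-external-url` (hostnames are placeholders):

```ini
# /etc/opsi/opsiconfd.conf (excerpt, hostnames are placeholders)
grafana-internal-url = http://grafana:3000
grafana-external-url = https://opsi.example.org:3000
```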

## Workers and executor workers
JSON-RPC requests are executed in an asyncio executor pool, because the opsi backend is currently not async.
Therefore, the maximum number of concurrent JSON-RPC requests is limited by the number of workers and the size of the executor pool:
**max concurrent JSON-RPC requests = workers * executor-workers**
If this limit is exceeded, new JSON-RPC requests have to wait for a free executor worker.
Long-running JSON-RPC requests can therefore block other requests.
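With illustrative numbers (check your own `--workers` and `--executor-workers` settings), the limit works out as:

```shell
# Illustrative values, not defaults - check your own configuration
workers=4
executor_workers=64
echo $((workers * executor_workers))  # prints 256: max concurrent JSON-RPC requests
```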

# Development in Dev Container
* Install Remote-Containers: https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers
* Set OPSILICSRV_TOKEN in docker/opsiconfd-dev/.env if available
* Open project in container:
	* \<F1\> -> Remote-Containers: Reopen in Container
	* or remote button in bottom left corner -> Reopen in Container
* In the container \<F5\> starts opsiconfd in debug mode (opsiconfd default)
* You can use the default debug settings, or set the number of workers and the log level by selecting opsiconfd in the debug/run tab.

## Run Tests
* Select "Run Tests" on the Status Bar, use the Test Explorer or run `uv run pytest --cov-append --cov opsiconfd --cov-report term --cov-report xml -vv tests` in a terminal


# Performance
## Redis
* According to the official Redis benchmarks, you can improve performance by up to 50% by using Unix sockets instead of TCP connections.
* Check slow queries with `SLOWLOG GET`
* Watch live queries with `MONITOR`
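A sketch of a Unix-socket setup for Redis; the socket path is an example, and whether opsiconfd's Redis URL setting accepts a `unix://` scheme depends on the Redis client in use (`redis-py` supports `unix:///path` URLs):

```
# /etc/redis/redis.conf (excerpt, socket path is an example)
unixsocket /var/run/redis/redis-server.sock
unixsocketperm 770
```

You can verify the socket with `redis-cli -s /var/run/redis/redis-server.sock ping`.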

## Memory usage / profiling
### py-spy
To analyze high CPU usage of opsiconfd processes, py-spy can be very helpful.
```shell
py-spy top --full-filenames --pid <pid-of-opsiconfd-worker-or-manager>
```

### valgrind
```shell
PYTHONMALLOC=malloc sudo -E valgrind --tool=memcheck --trace-children=yes --dsymutil=yes --leak-check=full --show-leak-kinds=all --log-file=/tmp/valgrind.log uv run opsiconfd --workers=1 --log-level-stderr=5
```
Alternative `PYTHONMALLOC` values for debugging:
* `PYTHONMALLOC=debug` - debug hooks on top of the default pymalloc allocator
* `PYTHONMALLOC=malloc_debug` - libc `malloc` with debug hooks

# Segfaults and Core dumps

opsiconfd leverages Python's faulthandler module to output a backtrace to stderr, which is captured by systemd-journald.

Look for `Current thread xxxxxx (most recent call first)` - this is the thread that crashed.
Tracebacks starting with `Thread xxxxxx (most recent call first)` belong to the other running threads.
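A self-contained illustration of picking out the crashed thread; the journal excerpt below is fabricated:

```shell
# Fabricated faulthandler output as it might appear in the journal
sample='Thread 0x00007f11 (most recent call first):
  File "worker.py", line 10 in run
Current thread 0x00007f22 (most recent call first):
  File "rpc.py", line 42 in execute'

# Only the "Current thread" section points at the crashing code path
echo "$sample" | grep -A1 "Current thread"
```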

To obtain and analyze a coredump, follow these steps:

Install systemd-coredump
```shell
apt install systemd-coredump
```

Edit opsiconfd Unit-File
```shell
systemctl edit opsiconfd
```

Add:
```ini
[Service]
LimitCORE=infinity
```

Activate configuration change
```shell
systemctl daemon-reload
systemctl restart opsiconfd
```

After a segfault run:
```shell
coredumpctl info opsiconfd
```

Analyze in gdb:
```shell
coredumpctl gdb

# or

gdb /usr/lib/opsiconfd/opsiconfd /var/lib/systemd/coredump/core.opsiconfd...

(gdb) bt
(gdb) info registers
(gdb) disassemble $pc-32, $pc+32
```

When Python's faulthandler is enabled, the backtrace will include additional entries after the actual crash, as faulthandler becomes active at that point. To identify the cause of the crash, examine the backtrace entries immediately preceding the faulthandler activation. For example:
```
#0  0x00007f868c46d9fc __pthread_kill_implementation (libc.so.6 + 0x969fc)
#1  0x00007f868c419476 __GI_raise (libc.so.6 + 0x42476)
#2  0x00007f868b22bb61 faulthandler_fatal_error (libpython3.13.so.1.0 + 0x5c4b61)
#3  0x00007f868c419520 __restore_rt (libc.so.6 + 0x42520)
#4  0x00007f868c5747fd __strlen_avx2 (libc.so.6 + 0x19d7fd)
#5  0x00007f868b11ce36 string_at (libpython3.13.so.1.0 + 0x4b5e36)    <<< segfault happened here
...
```

To simulate a segfault:
```shell
kill -s SIGSEGV <pid>
```

## Use Python with Debug Symbols
* Download a Python debug version from: https://github.com/astral-sh/python-build-standalone/releases/ (for example `cpython-3.13.5+20250702-x86_64-unknown-linux-gnu-debug-full.tar.zst`)
* Extract: `tar xf cpython-3.13.5+20250702-x86_64-unknown-linux-gnu-debug-full.tar.zst`
* Build: `uv run --python ./python/install opsi-dev-cli pyinstaller build --skip-transifex --extra-args "--noupx"`
* Check for debug sections: `readelf -S dist/opsiconfd/_internal/libpython3.13.so | grep debug`

## Check libraries for debug sections
```shell
find . -iname "*.so" | while read -r so; do
	if readelf -S "$so" | grep -q debug; then
		echo -e "\033[0;32m[*] debug section found in $so\033[0m"
	else
		echo -e "\033[1;33m[!] debug section missing in $so\033[0m"
	fi
done
```

## valgrind
```shell
valgrind --log-file=/tmp/valgrind.log --trace-children=yes --track-origins=yes --leak-check=full opsiconfd -l6
```
