Diffstat (limited to 'docs/configuration')
-rw-r--r-- | docs/configuration/cheatsheet.md      |  15
-rw-r--r-- | docs/configuration/howto_ejabberd.md  | 136
-rw-r--r-- | docs/configuration/optimizing_beam.md |  66
3 files changed, 217 insertions, 0 deletions
diff --git a/docs/configuration/cheatsheet.md b/docs/configuration/cheatsheet.md
index 0b13d7e88..ebf95ebc9 100644
--- a/docs/configuration/cheatsheet.md
+++ b/docs/configuration/cheatsheet.md
@@ -45,6 +45,7 @@ To add configuration to your config file, you can copy it from the base config.
   older software for these nicknames.
 * `max_pinned_statuses`: The maximum number of pinned statuses. `0` will disable the feature.
 * `autofollowed_nicknames`: Set to nicknames of (local) users that every new user should automatically follow.
+* `autofollowing_nicknames`: Set to nicknames of (local) users that will automatically follow every newly registered user.
 * `attachment_links`: Set to true to enable automatically adding attachment link text to statuses.
 * `max_report_comment_size`: The maximum size of the report comment (Default: `1000`).
 * `safe_dm_mentions`: If set to true, only mentions at the beginning of a post will be used to address people in direct messages. This is to prevent accidental mentioning of people when talking about them (e.g. "@friend hey i really don't like @enemy"). Default: `false`.
@@ -1077,6 +1078,20 @@ Control favicons for instances.
 * `enabled`: Allow/disallow displaying and getting instances favicons
 
+## Pleroma.User.Backup
+
+!!! note
+    Requires email to be enabled.
+
+* `:purge_after_days`: an integer, remove backup archives after N days.
+* `:limit_days`: an integer, limit users to one backup export per N days.
+* `:dir`: a string with a path to the backup temporary directory, or `nil` to let Pleroma pick a temporary directory in the following order:
+    1. the directory named by the TMPDIR environment variable
+    2. the directory named by the TEMP environment variable
+    3. the directory named by the TMP environment variable
+    4. C:\TMP on Windows or /tmp on Unix-like operating systems
+    5. as a last resort, the current working directory
+
 ## Frontend management
 
 Frontends in Pleroma are swappable - you can specify which one to use here.

diff --git a/docs/configuration/howto_ejabberd.md b/docs/configuration/howto_ejabberd.md
new file mode 100644
index 000000000..520a0acbc
--- /dev/null
+++ b/docs/configuration/howto_ejabberd.md
@@ -0,0 +1,136 @@
# Configuring Ejabberd (XMPP Server) to use Pleroma for authentication

If you want to give your Pleroma users an XMPP (chat) account, you can configure [Ejabberd](https://github.com/processone/ejabberd) to use your Pleroma server for user authentication, automatically giving every local user an XMPP account.

In general, you just have to follow the configuration described at [https://docs.ejabberd.im/admin/configuration/authentication/#external-script](https://docs.ejabberd.im/admin/configuration/authentication/#external-script). Please read that section carefully.

Copy the script below to a suitable path on your system and set its owner and permissions. Also, do not forget to adjust `PLEROMA_HOST` and `PLEROMA_PORT` if necessary.

```bash
cp pleroma_ejabberd_auth.py /etc/ejabberd/pleroma_ejabberd_auth.py
chown ejabberd /etc/ejabberd/pleroma_ejabberd_auth.py
chmod 700 /etc/ejabberd/pleroma_ejabberd_auth.py
```

Set the external auth parameters in your `ejabberd.yaml` file:

```yaml
auth_method: [external]
extauth_program: "python3 /etc/ejabberd/pleroma_ejabberd_auth.py"
extauth_instances: 3
auth_use_cache: false
```

Restart / reload your ejabberd service.
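For example, with systemd this could look like the following (a sketch only; the unit name and whether `ejabberdctl` is on your `PATH` depend on how ejabberd was installed):

```bash
# Restart the service (assuming a systemd unit named "ejabberd").
systemctl restart ejabberd

# If only ejabberd.yaml changed, a config reload may be enough:
ejabberdctl reload_config
```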
After restarting your Ejabberd server, your users should now be able to connect with their Pleroma credentials.

```python
import sys
import struct
import http.client
from base64 import b64encode
import logging


PLEROMA_HOST = "127.0.0.1"
PLEROMA_PORT = "4000"
AUTH_ENDPOINT = "/api/v1/accounts/verify_credentials"
USER_ENDPOINT = "/api/v1/accounts"
LOGFILE = "/var/log/ejabberd/pleroma_auth.log"

logging.basicConfig(filename=LOGFILE, level=logging.INFO)


# Pleroma functions
def create_connection():
    return http.client.HTTPConnection(PLEROMA_HOST, PLEROMA_PORT)


def verify_credentials(user: str, password: str) -> bool:
    # Check the credentials against Pleroma using HTTP Basic auth.
    user_pass_b64 = b64encode("{}:{}".format(
        user, password).encode('utf-8')).decode("ascii")
    headers = {
        "Authorization": "Basic {}".format(user_pass_b64)
    }

    try:
        conn = create_connection()
        conn.request("GET", AUTH_ENDPOINT, headers=headers)

        response = conn.getresponse()
        if response.status == 200:
            return True

        return False
    except Exception as e:
        logging.info("Cannot connect: %s", str(e))
        return False


def does_user_exist(user: str) -> bool:
    conn = create_connection()
    conn.request("GET", "{}/{}".format(USER_ENDPOINT, user))

    response = conn.getresponse()
    if response.status == 200:
        return True

    return False


def auth(username: str, server: str, password: str) -> bool:
    return verify_credentials(username, password)


def isuser(username, server):
    return does_user_exist(username)


def read():
    # ejabberd sends a 2-byte big-endian length followed by a
    # colon-separated command such as "auth:user:server:password".
    # Read from the binary buffer to avoid text-decoding issues.
    length_prefix = sys.stdin.buffer.read(2)
    if len(length_prefix) < 2:
        # stdin was closed (e.g. ejabberd shut down); exit instead of spinning.
        sys.exit(0)
    (pkt_size,) = struct.unpack('>H', length_prefix)
    pkt = sys.stdin.buffer.read(pkt_size).decode('utf-8')
    cmd = pkt.split(':')[0]
    if cmd == 'auth':
        username, server, password = pkt.split(':', 3)[1:]
        write(auth(username, server, password))
    elif cmd == 'isuser':
        username, server = pkt.split(':', 2)[1:]
        write(isuser(username, server))
    elif cmd == 'setpass':
        # u, s, p = pkt.split(':', 3)[1:]
        write(False)
    elif cmd == 'tryregister':
        # u, s, p = pkt.split(':', 3)[1:]
        write(False)
    elif cmd == 'removeuser':
        # u, s = pkt.split(':', 2)[1:]
        write(False)
    elif cmd == 'removeuser3':
        # u, s, p = pkt.split(':', 3)[1:]
        write(False)
    else:
        write(False)


def write(result):
    # Reply with a 2-byte length (always 2) followed by a 2-byte result:
    # 1 for success, 0 for failure.
    if result:
        sys.stdout.buffer.write(b'\x00\x02\x00\x01')
    else:
        sys.stdout.buffer.write(b'\x00\x02\x00\x00')
    sys.stdout.buffer.flush()


if __name__ == "__main__":
    logging.info("Starting pleroma ejabberd auth daemon...")
    while True:
        try:
            read()
        except Exception as e:
            logging.info(
                "Error while processing data from ejabberd: %s", str(e))
```
\ No newline at end of file
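If you want to sanity-check the script before pointing ejabberd at it, you can frame a request by hand. The sketch below uses a placeholder account and domain, assumes Pleroma is reachable at `PLEROMA_HOST:PLEROMA_PORT`, and must be run as a user that can write to `LOGFILE`; the two leading bytes are the big-endian length of the payload (`isuser:admin:xmpp.example.com` is 29 bytes, hence `\x00\x1d`):

```bash
# Send one framed "isuser" request to the auth script and dump the 4-byte reply.
# "admin" and "xmpp.example.com" are placeholders; adjust them to your setup.
printf '\x00\x1disuser:admin:xmpp.example.com' \
  | timeout 5 python3 /etc/ejabberd/pleroma_ejabberd_auth.py \
  | od -An -tx1
# Expected: "00 02 00 01" if the account exists, "00 02 00 00" otherwise.
# `timeout 5` is only a safety net in case the script keeps waiting for input.
```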
diff --git a/docs/configuration/optimizing_beam.md b/docs/configuration/optimizing_beam.md
new file mode 100644
index 000000000..e336bd36c
--- /dev/null
+++ b/docs/configuration/optimizing_beam.md
@@ -0,0 +1,66 @@
# Optimizing the BEAM

Pleroma is built upon the Erlang/OTP VM known as BEAM. The BEAM VM is highly optimized for latency, but this has drawbacks in environments without dedicated hardware. One of the tricks used by the BEAM VM is [busy waiting](https://en.wikipedia.org/wiki/Busy_waiting): instead of going to sleep, a scheduler spins for a period of time so the OS kernel does not pause the process and switch to another one waiting for CPU time. This inflates the apparent CPU usage of the application but keeps it immediately ready to execute another task, which can be observed with utilities like **top(1)** showing consistently high CPU usage for the process. Switching between processes is a rather expensive operation and also clears CPU caches, further hurting latency and performance. The goal of busy waiting is to avoid this penalty.

This strategy is very successful in making a performant and responsive application, but it is not desirable on virtual machines or on hardware with few CPU cores. Pleroma instances are often deployed on the same server as the required PostgreSQL database, which can lead to situations where the Pleroma application holds the CPU in a busy-wait loop and, as a result, the database cannot process requests in a timely manner. The fewer CPUs available, the more this problem is exacerbated. The latency is further amplified when the OS runs inside a virtual machine, as the hypervisor uses CPU time-slicing to pause the entire OS and switch between other tasks.

More adventurous admins can be creative with CPU affinity (e.g., *taskset* on Linux and *cpuset* on FreeBSD) to pin processes to specific CPUs and eliminate much of this contention; a rough sketch is included under "Example configurations" below. The most important advice is to run as few processes as possible on your server to achieve the best performance. Even idle background processes can occasionally create [software interrupts](https://en.wikipedia.org/wiki/Interrupt) and take attention away from the executing process, creating latency spikes and invalidating CPU caches, as they must be cleared when switching between processes for security.

Please only change these settings if you are experiencing issues or really know what you are doing. In general, there is no need to change them.

## VPS Provider Recommendations

### Good

* Hetzner Cloud

### Bad

* AWS (known to use burst scheduling)

## Example configurations

Tuning the BEAM requires you to provide a config file, normally called [vm.args](http://erlang.org/doc/man/erl.html#emulator-flags). If you are using systemd to manage the service, you can modify the unit file as follows:

`ExecStart=/usr/bin/elixir --erl '-args_file /opt/pleroma/config/vm.args' -S /usr/bin/mix phx.server`

Check your OS documentation to adopt a similar strategy on other platforms.

### Virtual Machine and/or few CPU cores

Disable busy waiting. This should generally only be done if you're on a platform that does burst scheduling, like AWS.

**vm.args:**

```
+sbwt none
+sbwtdcpu none
+sbwtdio none
```
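If your instance shares a small machine with PostgreSQL, you can combine the settings above with the CPU pinning mentioned earlier. A rough sketch for Linux follows; the CPU numbers, the `beam.smp` process match, and the `pleroma.service` unit name are placeholders to adapt to your setup:

```bash
# Pin a running BEAM (and all of its threads) to CPUs 0 and 1.
taskset -a -cp 0,1 "$(pgrep -o -f beam.smp)"

# Or pin the whole service persistently with a systemd override
# (run `systemctl edit pleroma.service` and add):
# [Service]
# CPUAffinity=0 1
```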
### Dedicated Hardware

Enable more busy waiting and increase the internal maximum limits for BEAM processes and ports. You can use this if you run on dedicated hardware, but it is not necessary.

**vm.args:**

```
+P 16777216
+Q 16777216
+K true
+A 128
+sbt db
+sbwt very_long
+swt very_low
+sub true
+Mulmbcs 32767
+Mumbcgs 1
+Musmbcs 2047
```

## Additional Reading

* [WhatsApp: Scaling to Millions of Simultaneous Connections](https://www.erlang-factory.com/upload/presentations/558/efsf2012-whatsapp-scaling.pdf)
* [Preemptive Scheduling and Spinlocks](https://www.uio.no/studier/emner/matnat/ifi/nedlagte-emner/INF3150/h03/annet/slides/preemptive.pdf)
* [The Curious Case of BEAM CPU Usage](https://stressgrid.com/blog/beam_cpu_usage/)