path: root/config/config.exs
2024-09-14  LDAP: permit overriding the CA root (Mark Felder)
2024-09-04  Limit the number of orphaned apps to delete to 100 every 10 mins (Mark Felder)
    Deleting them requires cascading queries that have to check the oauth_authorizations
    and oauth_tokens tables, so the batch is capped. This should keep ahead of most app
    registration spam and not overwhelm lower-powered servers.
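A minimal sketch of how a periodic cleanup like this is typically scheduled from config/config.exs with Oban's cron plugin. The worker module name and the args shape are hypothetical placeholders; only the 10-minute cadence and the 100-record cap come from the commit above.

    import Config

    # Illustrative only: Pleroma.Workers.Cron.AppCleanupWorker and the args shape
    # are hypothetical; the real worker and options may differ.
    config :pleroma, Oban,
      plugins: [
        {Oban.Plugins.Cron,
         crontab: [
           # every 10 minutes, delete at most 100 orphaned apps per run
           {"*/10 * * * *", Pleroma.Workers.Cron.AppCleanupWorker, args: %{limit: 100}}
         ]}
      ]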
2024-09-04  Add Cron worker to clean up orphaned apps hourly (Mark Felder)
2024-09-04  Rate Limit the OAuth App spam (Mark Felder)
2024-08-07  Merge branch 'remove/workerhelper' into 'develop' (feld)
    Remove WorkerHelper
    See merge request pleroma/pleroma!4166
2024-07-30  Remove WorkerHelper (Mark Felder)
2024-07-30  Merge branch 'hackney-pool-timeout' into 'develop' (feld)
    Align Hackney and Gun connection pool timeouts
    See merge request pleroma/pleroma!4197
2024-07-30  Align Hackney and Gun connection pool timeouts (Mark Felder)
2024-07-30  Increase federator outgoing job parallelism (Mark Felder)
2024-07-30  Remove unused Oban queue (Mark Felder)
2024-07-25  Merge remote-tracking branch 'origin/develop' into oban/backup (Mark Felder)
2024-07-24  Increase Oban.Pruner max_age to 15 mins (Mark Felder)
2024-07-24  Pad RichMediaWorker timeout to be 2s longer than the Rich Media HTTP timeout (Mark Felder)
2024-07-23  Make backup timeout configurable (Mark Felder)
2024-07-15  Increase slow job queue parallelization (Mark Felder)
2024-07-15  Increase background job concurrency to 20 (Mark Felder)
2024-07-12  Remove the unused ingestion queue (Mark Felder)
2024-06-12  Fix compatibility with Loggers in Elixir 1.15+ (Haelwenn (lanodan) Monnier)
2024-05-30  IPFS uploader: dialyzer fixes (Mark Felder)
    lib/pleroma/uploaders/ipfs.ex:43:no_return
    Function put_file/1 has no local return.
    ________________________________________________________________________________
    lib/pleroma/uploaders/ipfs.ex:49:call
    The function call will not succeed.

    Pleroma.HTTP.post(
      binary(),
      _mp :: %Tesla.Multipart{
        :boundary => binary(),
        :content_type_params => [binary()],
        :parts => [
          %Tesla.Multipart.Part{
            :body => binary(),
            :dispositions => [any()],
            :headers => [any()]
          },
          ...
        ]
      },
      [],
      [{:params, [{:"cid-version", <<49>>}]}]
    )

    will never return since the success typing is:

    (binary(), binary(), [{binary(), binary()}], Keyword.t()) ::
      {:error, _}
      | {:ok,
         %Tesla.Env{
           :__client__ => %Tesla.Client{
             :adapter => nil | {_, _} | {_, _, _},
             :fun => _,
             :post => [any()],
             :pre => [any()]
           },
           :__module__ => atom(),
           :body => _,
           :headers => [{_, _}],
           :method => :delete | :get | :head | :options | :patch | :post | :put | :trace,
           :opts => [{_, _}],
           :query => [{_, _}],
           :status => nil | integer(),
           :url => binary()
         }}

    and the contract is

    (Pleroma.HTTP.Request.url(), String.t(), Pleroma.HTTP.Request.headers(), :elixir.keyword()) ::
      {:ok, Tesla.Env.t()} | {:error, any()}
2024-05-28  Merge branch 'develop' of git.pleroma.social:pleroma/pleroma into pleroma-secure-mode (Lain Soykaf)
2024-05-28  Merge branch 'httpfixes' into 'develop' (lain)
    Some HTTP and connection pool improvements
    See merge request pleroma/pleroma!4124
2024-05-27  Merge branch 'simpler-oban-queues' into 'develop' (feld)
    Oban queue simplification
    See merge request pleroma/pleroma!4123
2024-05-27  Merge branch 'explicitly-allow-unsafe-2' into 'develop' (lain)
    Explicitly allow unsafe 2
    See merge request pleroma/pleroma!4125
2024-05-27  Merge branch 'qdrant-search-2' into 'develop' (lain)
    Search: Basic Qdrant/Ollama search
    See merge request pleroma/pleroma!4109
2024-05-27  RichMedia use of ConcurrentLimiter was removed in the refactor (Mark Felder)
2024-05-27  Remove MediaProxyWarmingPolicy config for ConcurrentLimiter as we are not using it (Mark Felder)
2024-05-27  Merge branch 'logger-metadata' into 'develop' (feld)
    Logger metadata
    See merge request pleroma/pleroma!3990
2024-05-27  Oban queue simplification (Mark Felder)
2024-05-27  HttpSecurityPlug: Don't allow unsafe-eval by default (Lain Soykaf)
2024-05-27  Merge branch 'anti-mention-spam-mrf' into 'develop' (feld)
    Anti-mention Spam MRF
    See merge request pleroma/pleroma!4072
2024-05-27  Make user age limit configurable (Mark Felder)
    Switch to milliseconds for consistency with other configuration options in the codebase.
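A hedged sketch of what the millisecond-based setting could look like in config/config.exs; the :mrf_antimentionspam group and :user_age_limit key are assumptions rather than names confirmed by the commit.

    import Config

    # Assumed option names; 30_000 ms (30 seconds) is an illustrative value.
    config :pleroma, :mrf_antimentionspam,
      user_age_limit: 30_000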
2024-05-27  DNSRBL in an MRF (Mark Felder)
2024-05-27  Merge branch 'nsfw-api-mrf' into 'develop' (lain)
    NSFW API Policy
    See merge request pleroma/pleroma!3471
2024-05-27  Rework Gun connection pool sizes to make better use of the default 250 connections (Mark Felder)
2024-05-27  Add a dedicated connection pool for Rich Media (Mark Felder)
    Sharing this pool with regular Media is problematic: Rich Media connects to many
    different domains and thrashes the pool, while regular Media makes predictable
    connections to the webservers hosting media for the fediverse servers you peer with.
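A sketch of how a dedicated pool can sit alongside the existing ones under Pleroma's :pools configuration; the sizes shown are illustrative assumptions, not the committed values.

    import Config

    # Sizes are placeholders; only the idea of a separate :rich_media pool is
    # taken from the commit above.
    config :pleroma, :pools,
      media: [size: 50, max_waiting: 20],
      # Rich Media hits many unrelated domains, so it gets its own pool instead
      # of thrashing the pool used for predictable media hosts.
      rich_media: [size: 25, max_waiting: 20]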
2024-05-27  Merge branch 'develop' of git.pleroma.social:pleroma/pleroma into nsfw-api-mrf (Lain Soykaf)
2024-05-27  Merge branch 'develop' of git.pleroma.social:pleroma/pleroma into pleroma-ipfs_uploader (Lain Soykaf)
2024-05-27  QdrantSearch: Add health checks. (Lain Soykaf)
2024-05-27  Merge branch 'develop' of git.pleroma.social:pleroma/pleroma into qdrant-search-2 (Lain Soykaf)
2024-05-25  Search backend healthcheck process (Mark Felder)
2024-05-19  B Config: Set default Qdrant embedder to our fastembed-api server (Lain Soykaf)
2024-05-19  B QdrantSearch: Switch to OpenAI api (Lain Soykaf)
2024-05-14  Search: Basic Qdrant/Ollama search (Lain Soykaf)
2024-05-07  Respect the TTL returned in OpenGraph tags (Mark Felder)
2024-05-07  Increase the :max_body for Rich Media to 5MB (Mark Felder)
    Websites are increasingly bloated, with tricks like inlining content (e.g., CNN.com)
    pushing pages to 5MB or more. Even this value may still be too low.
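Illustrative only, assuming the limit is exposed as a :max_body key under the :rich_media group in config/config.exs; the log does not show the exact location of the option.

    import Config

    # Assumed key placement; 5_000_000 bytes approximates the 5MB mentioned above.
    config :pleroma, :rich_media,
      max_body: 5_000_000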
2024-05-07  RichMedia refactor (Mark Felder)
    Rich Media parsing was previously handled on-demand with a 2 second HTTP request
    timeout and retained only in Cachex. Every time a Pleroma instance is restarted it
    has to request and parse the data again for each status with a detected URL. When
    fetching a batch of statuses they were processed in parallel to attempt to keep the
    maximum latency at 2 seconds, but this often resulted in a timeline appearing to
    hang during loading due to a URL that could not be successfully reached. URLs with
    image links that expire (Amazon AWS) were parsed and inserted with a TTL to ensure
    the image link would not break.

    Rich Media data is now cached in the database and fetched asynchronously. Cachex is
    used as a read-through cache. When the data becomes available we stream an update
    to the clients. If the result is returned quickly the experience is almost
    seamless. Activities were already processed for their Rich Media data during
    ingestion to warm the cache, so users should not normally encounter the
    asynchronous loading of the Rich Media data.

    Implementation notes:

    - The async worker is a Task with a globally unique process name to prevent
      duplicate processing of the same URL
    - The Task will attempt to fetch the data 3 times with increasing sleep time
      between attempts
    - The HTTP request obeys the default HTTP request timeout value instead of 2 seconds
    - URLs that cannot be successfully parsed due to an unexpected error receive a
      negative cache entry for 15 minutes
    - URLs that fail with an expected error will receive a negative cache with no TTL
    - Activities that have no detected URLs insert a nil value in the Cachex
      :scrubber_cache so we do not repeat parsing the object content with Floki every
      time the activity is rendered
    - Expiring image URLs are handled with an Oban job
    - There is no automatic cleanup of the Rich Media data in the database, but it is
      safe to delete at any time
    - The post draft/preview feature makes the URL processing synchronous so the
      rendered post preview will have an accurate rendering

    Overall performance of timelines and creating new posts which contain URLs is
    greatly improved.
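A minimal sketch of the two mechanisms the implementation notes describe: a globally registered fetch process so a given URL is only fetched once at a time, and a bounded retry loop with increasing sleep between attempts. The module and function names are illustrative and not Pleroma's actual code; maybe_start/2 takes any fetch function returning {:ok, data} or {:error, reason}.

    # Illustrative sketch, not Pleroma's module.
    defmodule RichMediaFetchSketch do
      @max_attempts 3

      # Start an async fetch unless some process is already handling this URL.
      def maybe_start(url, fetch_fun) do
        Task.start(fn ->
          case :global.register_name({:rich_media_fetch, url}, self()) do
            :yes -> fetch_with_retries(url, fetch_fun, 1)
            # another process registered first, so the URL is already in flight
            :no -> :already_running
          end
        end)
      end

      defp fetch_with_retries(url, fetch_fun, attempt) do
        case fetch_fun.(url) do
          {:ok, data} ->
            {:ok, data}

          {:error, _reason} when attempt < @max_attempts ->
            # sleep longer after each failed attempt before retrying
            Process.sleep(attempt * 2_000)
            fetch_with_retries(url, fetch_fun, attempt + 1)

          {:error, reason} ->
            {:error, reason}
        end
      end
    end

The global name is released automatically when the Task exits, so a later post containing the same URL can trigger a fresh fetch.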
2024-03-19  logger: remove request_id metadata which is not useful (Mark Felder)
2024-03-19  Logger metadata for request path and authenticated user (Mark Felder)
2024-03-19  Logger metadata for inbound federation requests (Mark Felder)
2024-03-18  Update minimum Postgres version to 11.0; disable JIT (Mark Felder)
    PostgreSQL 11 is the release where JIT was introduced, and it should be disabled:
    Pleroma's queries do not benefit from JIT, and it can increase query latency.
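One hedged way to express this from config/config.exs is through Postgrex session parameters on the Repo; whether the commit sets it here or relies on postgresql.conf is not shown in the log.

    import Config

    config :pleroma, Pleroma.Repo,
      # PostgreSQL 11+ ships with JIT available; Pleroma's queries do not benefit,
      # so turn it off for every session this Repo opens.
      parameters: [jit: "off"]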