{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":769327042,"defaultBranch":"master","name":"VNSleuth","ownerLogin":"Nekotekina","currentUserCanPush":false,"isFork":false,"isEmpty":false,"createdAt":"2024-03-08T20:17:51.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/6028184?v=4","public":true,"private":false,"isOrgOwned":false},"refInfo":{"name":"","listCacheKey":"v0:1709929071.0","currentOid":""},"activityList":{"items":[{"before":"d14be65f169a155951f1085a804fa6c7037a8d30","after":"d7efe00fdfe45b192214c7978f875875dfaa37b2","ref":"refs/heads/master","pushedAt":"2024-09-19T14:34:55.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"Nekotekina","name":"Ivan","path":"/Nekotekina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6028184?s=80&v=4"},"commit":{"message":"New concept: \"infinite\" KV cache (beware of high storage requirements)\n\nThis is a very experimental implementation of context extension trick.\nA fraction of context tokens is dedicated for \"remembering\" old things.\nThings that would otherwise slip out of context window and forgotten.\nFor determining relevancy, a pretty dumb algorithm is currently used.\nIt's a bit of bruteforce and chance-reliant rather than precise.\nI hope to come up with some better ideas for relevancy extraction.\n\nRecommended: 500 GB of fast SSD space and 20-30 GB of free RAM for caches.\nRequirements may be halved in future, and depend on the LLM used.","shortMessageHtmlLink":"New concept: \"infinite\" KV cache (beware of high storage requirements)"}},{"before":"473fd1c55a6cf68c0c341fba9571c2055b04de2f","after":"d14be65f169a155951f1085a804fa6c7037a8d30","ref":"refs/heads/master","pushedAt":"2024-09-17T07:51:57.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"Nekotekina","name":"Ivan","path":"/Nekotekina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6028184?s=80&v=4"},"commit":{"message":"New concept: \"infinite\" KV cache (beware of high storage requirements)\n\nThis is a very experimental implementation of context extension trick.\nA fraction of context tokens is dedicated for \"remembering\" old things.\nThings that would otherwise slip out of context window and forgotten.\nFor determining relevancy, a pretty dumb algorithm is currently used.\nIt's a bit of bruteforce and chance-reliant rather than precise.\nI hope to come up with some better ideas for relevancy extraction.\n\nRecommended: 500 GB of fast SSD space and 20-30 GB of free RAM for caches.\nRequirements may be halved in future, and depend on the LLM used.","shortMessageHtmlLink":"New concept: \"infinite\" KV cache (beware of high storage requirements)"}},{"before":"83c37df9b36a5a7c5a35b37fbefec6770ccbbbd7","after":"473fd1c55a6cf68c0c341fba9571c2055b04de2f","ref":"refs/heads/master","pushedAt":"2024-09-17T07:41:46.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"Nekotekina","name":"Ivan","path":"/Nekotekina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6028184?s=80&v=4"},"commit":{"message":"New concept: \"infinite\" KV cache (beware of high storage requirements)\n\nThis is a very experimental implementation of context extension trick.\nA fraction of context tokens is dedicated for \"remembering\" old things.\nThings that would otherwise slip out of context window and forgotten.\nFor determining relevancy, a pretty dumb algorithm is currently used.\nIt's a bit of bruteforce and chance-reliant rather than precise.\nI hope to come up with some 
better ideas for relevancy extraction.\n\nRecommended: 500 GB of fast SSD space and 20-30 GB of free RAM for caches.\nRequirements may be halved in future, and depend on the LLM used.","shortMessageHtmlLink":"New concept: \"infinite\" KV cache (beware of high storage requirements)"}},{"before":"1c52d872e65bf68cedc4643f13a6dc2513aed13e","after":"83c37df9b36a5a7c5a35b37fbefec6770ccbbbd7","ref":"refs/heads/master","pushedAt":"2024-09-16T18:10:01.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"Nekotekina","name":"Ivan","path":"/Nekotekina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6028184?s=80&v=4"},"commit":{"message":"New concept: \"infinite\" KV cache (beware of high storage requirements)\n\nThis is a very experimental implementation of context extension trick.\nA fraction of context tokens is dedicated for \"remembering\" old things.\nThings that would otherwise slip out of context window and forgotten.\nFor determining relevancy, a pretty dumb algorithm is currently used.\nIt's a bit of bruteforce and chance-reliant rather than precise.\nI hope to come up with some better ideas for relevancy extraction.\n\nRecommended: 500 GB of fast SSD space and 20-30 GB of free RAM for caches.\nRequirements may be halved in future, and depend on the LLM used.","shortMessageHtmlLink":"New concept: \"infinite\" KV cache (beware of high storage requirements)"}},{"before":"66febbd83127fa680ff15890cb178b49e53de00e","after":"1c52d872e65bf68cedc4643f13a6dc2513aed13e","ref":"refs/heads/master","pushedAt":"2024-09-16T17:01:58.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"Nekotekina","name":"Ivan","path":"/Nekotekina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6028184?s=80&v=4"},"commit":{"message":"New concept: \"infinite\" KV cache (beware of high storage requirements)\n\nThis is a very experimental implementation of context extension trick.\nA fraction of context tokens is dedicated for \"remembering\" old things.\nThings that would otherwise slip out of context window and forgotten.\nFor determining relevancy, a pretty dumb algorithm is currently used.\nIt's a bit of bruteforce and chance-reliant rather than precise.\nI hope to come up with some better ideas for relevancy extraction.\n\nRecommended: 500 GB of fast SSD space and 20-30 GB of free RAM for caches.\nRequirements may be halved in future, and depend on the LLM used.","shortMessageHtmlLink":"New concept: \"infinite\" KV cache (beware of high storage requirements)"}},{"before":"cf3c48a5cdb910841d22dcf417629c2508628fde","after":"66febbd83127fa680ff15890cb178b49e53de00e","ref":"refs/heads/master","pushedAt":"2024-09-16T10:06:41.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"Nekotekina","name":"Ivan","path":"/Nekotekina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6028184?s=80&v=4"},"commit":{"message":"New concept: \"infinite\" KV cache (beware of high storage requirements)\n\nThis is a very experimental implementation of context extension trick.\nA fraction of context tokens is dedicated for \"remembering\" old things.\nThings that would otherwise slip out of context window and forgotten.\nFor determining relevancy, a pretty dumb algorithm is currently used.\nIt's a bit of bruteforce and chance-reliant rather than precise.\nI hope to come up with some better ideas for relevancy extraction.\n\nRecommended: 500 GB of fast SSD space and 20-30 GB of free RAM for caches.\nRequirements may be halved in future, and depend on the LLM 
used.","shortMessageHtmlLink":"New concept: \"infinite\" KV cache (beware of high storage requirements)"}},{"before":"172cba4f117038b92aeec19aa348508aacdf895a","after":"cf3c48a5cdb910841d22dcf417629c2508628fde","ref":"refs/heads/master","pushedAt":"2024-09-16T08:54:54.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"Nekotekina","name":"Ivan","path":"/Nekotekina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6028184?s=80&v=4"},"commit":{"message":"New concept: \"infinite\" KV cache (beware of high storage requirements)\n\nThis is a very experimental implementation of context extension trick.\nA fraction of context tokens is dedicated for \"remembering\" old things.\nThings that would otherwise slip out of context window and forgotten.\nFor determining relevancy, a pretty dumb algorithm is currently used.\nIt's a bit of bruteforce and chance-reliant rather than precise.\nI hope to come up with some better ideas for relevancy extraction.\n\nRecommended: 500 GB of fast SSD space and 20-30 GB of free RAM for caches.\nRequirements may be halved in future, and depend on the LLM used.","shortMessageHtmlLink":"New concept: \"infinite\" KV cache (beware of high storage requirements)"}},{"before":"cc9405c78add3b8bfc5df78117ab1186ae6cd413","after":"172cba4f117038b92aeec19aa348508aacdf895a","ref":"refs/heads/master","pushedAt":"2024-09-16T03:57:43.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"Nekotekina","name":"Ivan","path":"/Nekotekina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6028184?s=80&v=4"},"commit":{"message":"New concept: \"infinite\" KV cache (beware of high storage requirements)\n\nThis is a very experimental implementation of context extension trick.\nA fraction of context tokens is dedicated for \"remembering\" old things.\nThings that would otherwise slip out of context window and forgotten.\nFor determining relevancy, a pretty dumb algorithm is currently used.\nIt's a bit of bruteforce and chance-reliant rather than precise.\nI hope to come up with some better ideas for relevancy extraction.\n\nRecommended: 500 GB of fast SSD space and 20-30 GB of free RAM for caches.\nRequirements may be halved in future, and depend on the LLM used.","shortMessageHtmlLink":"New concept: \"infinite\" KV cache (beware of high storage requirements)"}},{"before":"d660bd783f504c9552b1b3445fb1c882f4b4c564","after":"cc9405c78add3b8bfc5df78117ab1186ae6cd413","ref":"refs/heads/master","pushedAt":"2024-09-16T03:20:17.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"Nekotekina","name":"Ivan","path":"/Nekotekina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6028184?s=80&v=4"},"commit":{"message":"New concept: \"infinite\" KV cache (beware of high storage requirements)\n\nThis is a very experimental implementation of context extension trick.\nA fraction of context tokens is dedicated for \"remembering\" old things.\nThings that would otherwise slip out of context window and forgotten.\nFor determining relevancy, a pretty dumb algorithm is currently used.\nIt's a bit of bruteforce and chance-reliant rather than precise.\nI hope to come up with some better ideas for relevancy extraction.\n\nRecommended: 500 GB of fast SSD space and 20-30 GB of free RAM for caches.\nRequirements may be halved in future, and depend on the LLM used.","shortMessageHtmlLink":"New concept: \"infinite\" KV cache (beware of high storage 
requirements)"}},{"before":"9b4db33d202cab7fe686c8a23cf57b1f4b888d6d","after":"d660bd783f504c9552b1b3445fb1c882f4b4c564","ref":"refs/heads/master","pushedAt":"2024-09-15T10:22:12.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"Nekotekina","name":"Ivan","path":"/Nekotekina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6028184?s=80&v=4"},"commit":{"message":"New concept: \"infinite\" KV cache (beware of high storage requirements)\n\nThis is a very experimental implementation of context extension trick.\nA fraction of context tokens is dedicated for \"remembering\" old things.\nThings that would otherwise slip out of context window and forgotten.\nFor determining relevancy, a pretty dumb algorithm is currently used.\nIt's a bit of bruteforce and chance-reliant rather than precise.\nI hope to come up with some better ideas for relevancy extraction.\n\nRecommended: 500 GB of fast SSD space and 20-30 GB of free RAM for caches.\nRequirements may be halved in future, and depend on the LLM used.","shortMessageHtmlLink":"New concept: \"infinite\" KV cache (beware of high storage requirements)"}},{"before":"8079e957b5e575296bacf6ea64e0fc1106d133c6","after":"9b4db33d202cab7fe686c8a23cf57b1f4b888d6d","ref":"refs/heads/master","pushedAt":"2024-09-14T17:59:38.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"Nekotekina","name":"Ivan","path":"/Nekotekina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6028184?s=80&v=4"},"commit":{"message":"New concept: \"infinite\" KV cache (beware of high storage requirements)\n\nThis is a very experimental implementation of context extension trick.\nA fraction of context tokens is dedicated for \"remembering\" old things.\nThings that would otherwise slip out of context window and forgotten.\nFor determining relevancy, a pretty dumb algorithm is currently used.\nIt's a bit of bruteforce and chance-reliant rather than precise.\nI hope to come up with some better ideas for relevancy extraction.\n\nRecommended: 500 GB of fast SSD space and 20-30 GB of free RAM for caches.\nRequirements may be halved in future, and depend on the LLM used.","shortMessageHtmlLink":"New concept: \"infinite\" KV cache (beware of high storage requirements)"}},{"before":"b17e81f320af184e38ac8883a7d1b83fb3a9662e","after":"8079e957b5e575296bacf6ea64e0fc1106d133c6","ref":"refs/heads/master","pushedAt":"2024-09-14T06:12:25.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"Nekotekina","name":"Ivan","path":"/Nekotekina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6028184?s=80&v=4"},"commit":{"message":"Update llama.cpp (better FA support)","shortMessageHtmlLink":"Update llama.cpp (better FA support)"}},{"before":"bd77d72de4d4e39713c57026979e7c74eb8338a0","after":"b17e81f320af184e38ac8883a7d1b83fb3a9662e","ref":"refs/heads/master","pushedAt":"2024-09-14T06:10:10.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"Nekotekina","name":"Ivan","path":"/Nekotekina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6028184?s=80&v=4"},"commit":{"message":"Update llama.cpp (better FA support)","shortMessageHtmlLink":"Update llama.cpp (better FA 
support)"}},{"before":"236c85eb4d21a57aa2d333be455debbb3f7eeaee","after":"bd77d72de4d4e39713c57026979e7c74eb8338a0","ref":"refs/heads/master","pushedAt":"2024-09-14T05:58:21.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"Nekotekina","name":"Ivan","path":"/Nekotekina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6028184?s=80&v=4"},"commit":{"message":"Update llama.cpp (better FA support)","shortMessageHtmlLink":"Update llama.cpp (better FA support)"}},{"before":"ab38f82520aae22aab64adbabe9af07b495c0e1e","after":"236c85eb4d21a57aa2d333be455debbb3f7eeaee","ref":"refs/heads/master","pushedAt":"2024-09-02T04:30:53.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"Nekotekina","name":"Ivan","path":"/Nekotekina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6028184?s=80&v=4"},"commit":{"message":"Parser improvements: read_le with custom copy function\n\nPreparations for ExHIBIT script support.\nAttempt to crack ExHIBIT encryption automatically.","shortMessageHtmlLink":"Parser improvements: read_le with custom copy function"}},{"before":"de3ce99d2005bc7549abf1de43f4bfd01ac21c7c","after":"ab38f82520aae22aab64adbabe9af07b495c0e1e","ref":"refs/heads/master","pushedAt":"2024-09-01T20:36:56.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"Nekotekina","name":"Ivan","path":"/Nekotekina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6028184?s=80&v=4"},"commit":{"message":"Parser improvements: read_le with custom copy function\n\nPreparations for ExHIBIT script support.\nAttempt to crack ExHIBIT encryption automatically.","shortMessageHtmlLink":"Parser improvements: read_le with custom copy function"}},{"before":"d132ba463766f0c196a69e25412af52abfe96e39","after":"de3ce99d2005bc7549abf1de43f4bfd01ac21c7c","ref":"refs/heads/master","pushedAt":"2024-09-01T20:32:04.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"Nekotekina","name":"Ivan","path":"/Nekotekina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6028184?s=80&v=4"},"commit":{"message":"Parser improvements: read_le with custom copy function\n\nPreparations for ExHIBIT script support.\nAttempt to crack ExHIBIT encryption automatically.","shortMessageHtmlLink":"Parser improvements: read_le with custom copy function"}},{"before":"d89f56d13fdc853bcaf3530c4efec2962f4ca1cf","after":"d132ba463766f0c196a69e25412af52abfe96e39","ref":"refs/heads/master","pushedAt":"2024-09-01T20:27:50.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"Nekotekina","name":"Ivan","path":"/Nekotekina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6028184?s=80&v=4"},"commit":{"message":"Add support for headerless Ethornell script format\n\nAlso improve furigana dumping","shortMessageHtmlLink":"Add support for headerless Ethornell script format"}},{"before":"dfc502edc6fad1e5e8f0a090917187c495a1eaff","after":"d89f56d13fdc853bcaf3530c4efec2962f4ca1cf","ref":"refs/heads/master","pushedAt":"2024-09-01T20:26:12.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"Nekotekina","name":"Ivan","path":"/Nekotekina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6028184?s=80&v=4"},"commit":{"message":"Add support for headerless Ethornell script format\n\nAlso improve furigana dumping","shortMessageHtmlLink":"Add support for headerless Ethornell script 
format"}},{"before":"183c9385f590466b85a42778ce73413eace0087f","after":"dfc502edc6fad1e5e8f0a090917187c495a1eaff","ref":"refs/heads/master","pushedAt":"2024-08-31T16:09:24.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"Nekotekina","name":"Ivan","path":"/Nekotekina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6028184?s=80&v=4"},"commit":{"message":"Fixups\n\nFix raw_discards insane values","shortMessageHtmlLink":"Fixups"}},{"before":"81d1e12d2abf0e2d42c87ad4cab8132526ccdc3f","after":"183c9385f590466b85a42778ce73413eace0087f","ref":"refs/heads/master","pushedAt":"2024-08-31T10:32:05.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"Nekotekina","name":"Ivan","path":"/Nekotekina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6028184?s=80&v=4"},"commit":{"message":"Fixups\n\nFix raw_discards insane values","shortMessageHtmlLink":"Fixups"}},{"before":"62695407877ad901a84574208634d4845f8d1ecb","after":"81d1e12d2abf0e2d42c87ad4cab8132526ccdc3f","ref":"refs/heads/master","pushedAt":"2024-08-30T22:33:32.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"Nekotekina","name":"Ivan","path":"/Nekotekina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6028184?s=80&v=4"},"commit":{"message":"Add support for headerless Ethornell script format\n\nAlso improve furigana dumping","shortMessageHtmlLink":"Add support for headerless Ethornell script format"}},{"before":"7b450f3e0f6be1ee349bc4f12345b35c1c584608","after":"62695407877ad901a84574208634d4845f8d1ecb","ref":"refs/heads/master","pushedAt":"2024-08-30T15:27:47.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"Nekotekina","name":"Ivan","path":"/Nekotekina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6028184?s=80&v=4"},"commit":{"message":"Add support for headerless Ethornell script format\n\nAlso improve furigana dumping","shortMessageHtmlLink":"Add support for headerless Ethornell script format"}},{"before":"d5f1d73a5475ef822f707fddb9879ccec72e6709","after":"7b450f3e0f6be1ee349bc4f12345b35c1c584608","ref":"refs/heads/master","pushedAt":"2024-08-30T10:47:16.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"Nekotekina","name":"Ivan","path":"/Nekotekina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6028184?s=80&v=4"},"commit":{"message":"Add support for headerless Ethornell script format\n\nAlso improve furigana dumping","shortMessageHtmlLink":"Add support for headerless Ethornell script format"}},{"before":"48c100d9f6df1e5f19d8a5370daa7229c2b8c88f","after":"d5f1d73a5475ef822f707fddb9879ccec72e6709","ref":"refs/heads/master","pushedAt":"2024-08-30T09:17:20.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"Nekotekina","name":"Ivan","path":"/Nekotekina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6028184?s=80&v=4"},"commit":{"message":"Add support for headerless Ethornell script format\n\nAlso improve furigana dumping","shortMessageHtmlLink":"Add support for headerless Ethornell script format"}},{"before":"1b4ecf735c7c97f769967270215dc6228724c3e0","after":"48c100d9f6df1e5f19d8a5370daa7229c2b8c88f","ref":"refs/heads/master","pushedAt":"2024-08-29T19:17:58.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"Nekotekina","name":"Ivan","path":"/Nekotekina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6028184?s=80&v=4"},"commit":{"message":"Add support for headerless Ethornell script format\n\nAlso improve furigana dumping","shortMessageHtmlLink":"Add support for 
headerless Ethornell script format"}},{"before":"63497ab61baf497afa87d56287f6adbc70d5fd19","after":"1b4ecf735c7c97f769967270215dc6228724c3e0","ref":"refs/heads/master","pushedAt":"2024-08-29T19:16:24.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"Nekotekina","name":"Ivan","path":"/Nekotekina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6028184?s=80&v=4"},"commit":{"message":"Minor fixes","shortMessageHtmlLink":"Minor fixes"}},{"before":"7d3ddb39e171becc595144df173b3acde2149848","after":"63497ab61baf497afa87d56287f6adbc70d5fd19","ref":"refs/heads/master","pushedAt":"2024-08-27T12:17:57.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"Nekotekina","name":"Ivan","path":"/Nekotekina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6028184?s=80&v=4"},"commit":{"message":"Improve route finalization and restart scenario\n\nAdd Ctrl+F command for manual segment finalization at the end of route.\nAllow reloading empty history from arbitrary position.","shortMessageHtmlLink":"Improve route finalization and restart scenario"}},{"before":"849d21572905849c398409dcadf46242f26c99bd","after":"7d3ddb39e171becc595144df173b3acde2149848","ref":"refs/heads/master","pushedAt":"2024-08-25T10:32:15.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"Nekotekina","name":"Ivan","path":"/Nekotekina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6028184?s=80&v=4"},"commit":{"message":"Smoother and less wastefull ahead of time translating\n\nReduce number of ahead of time lines to 6 (configurable).\nIncrease context area for shown messages from 75% to 7/8.\nMake background thread work without spurious restarts.","shortMessageHtmlLink":"Smoother and less wastefull ahead of time translating"}},{"before":"ca570853d59c530cb093de41a32193470a69633a","after":"849d21572905849c398409dcadf46242f26c99bd","ref":"refs/heads/master","pushedAt":"2024-08-25T04:47:11.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"Nekotekina","name":"Ivan","path":"/Nekotekina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6028184?s=80&v=4"},"commit":{"message":"Smoother and less wastefull ahead of time translating\n\nReduce number of ahead of time lines to 6 (configurable).\nIncrease context area for shown messages from 75% to 7/8.\nMake background thread work without spurious restarts.","shortMessageHtmlLink":"Smoother and less wastefull ahead of time translating"}}],"hasNextPage":true,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"cursor":"djE6ks8AAAAEuvnghQA","startCursor":null,"endCursor":null}},"title":"Activity ยท Nekotekina/VNSleuth"}
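As a rough, hypothetical illustration of the ahead-of-time translating described in the last entry: a background thread keeps at most a configurable number of upcoming lines translated while the UI thread consumes them. Only the "6 lines ahead, configurable" figure comes from the commit message; the class, queue layout and translate() stub are assumptions.

```cpp
// Hypothetical sketch of ahead-of-time translation: a background worker keeps up to
// kAheadLines upcoming lines translated in advance, so the UI rarely has to wait.
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>
#include <string>
#include <thread>

constexpr std::size_t kAheadLines = 6;   // configurable in the real tool, per the commit

std::string translate(const std::string& line) {
    return "[EN] " + line;               // stand-in for the actual LLM call
}

class AheadOfTimeTranslator {
public:
    explicit AheadOfTimeTranslator(std::deque<std::string> script)
        : pending_(std::move(script)), worker_([this] { run(); }) {}

    ~AheadOfTimeTranslator() {
        { std::lock_guard<std::mutex> lk(m_); stop_ = true; }
        cv_.notify_all();
        worker_.join();
    }

    // Called by the UI thread when the player advances to the next line.
    // Returns an empty string once the script is exhausted.
    std::string next_line() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !ready_.empty() || (pending_.empty() && !in_flight_); });
        if (ready_.empty())
            return {};
        std::string out = std::move(ready_.front());
        ready_.pop_front();
        cv_.notify_all();                // wake the worker: there is room again
        return out;
    }

private:
    void run() {
        std::unique_lock<std::mutex> lk(m_);
        while (!stop_) {
            // Translate only while fewer than kAheadLines lines are ready.
            if (pending_.empty() || ready_.size() >= kAheadLines) {
                cv_.wait(lk, [this] {
                    return stop_ || (!pending_.empty() && ready_.size() < kAheadLines);
                });
                continue;
            }
            std::string src = std::move(pending_.front());
            pending_.pop_front();
            in_flight_ = true;
            lk.unlock();                 // do the slow work without holding the lock
            std::string dst = translate(src);
            lk.lock();
            ready_.push_back(std::move(dst));
            in_flight_ = false;
            cv_.notify_all();
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::deque<std::string> pending_, ready_;
    bool stop_ = false;
    bool in_flight_ = false;
    std::thread worker_;
};
```

Bounding the ready queue at kAheadLines is what keeps such a scheme from being wasteful: lines far ahead of the reader are not translated until they are about to be needed.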