AbstractPhila PRO

AbstractPhil

AI & ML interests

datasets, research papers, experimentation, vision, classification, text encoders, tokenization, llms, diffusion, distillation, and more.

Recent Activity

replied to their post about 22 hours ago
Training and tuning a top-500k geometric vocabulary is doable, but scaling upward is highly impractical for me, and this one has many logistics issues. Primarily, there's no precedent I know of for literally training hundreds of millions of potential character combinations, each with prefabricated crystal variations that tune a specific series of trajectories in specific directions based on the input text targeting other crystals, the weights, and the batch. The dataset also needs to be properly prepared, and I can't find any prefabricated variant of the data format the symbolic lexical engine needs in order to be robust.

There are a few possibilities here. Batch size is the obvious one: take in a large influx of information, grab any matching words, characters, or other entries, and update those using the formulas for topological tuning. The main issue is that the language web is massive. Billions of variations can crop up from a single document if you aren't hard-capping depth: traverse the whole tree and, say, "the quick brown fox" becomes words, which become definitions, which become letters, and that's before counting multi-pass finetuning. That alone is a massive logistics nightmare to implement, but thankfully this is the modern era.

Simply put, if I hard-cap the vocabulary at 500k entries with a depth of no more than 50,000 pentachora crystals each, it should be capable of housing an approximate word structure within a trajectory space. I'd rather run it on a fleet of devices and feed it The Pile, BookCorpus, and everything else, so we can get some truly trajectory-related subsets of 500k+ crystals per token, upward of 100,000,000 or so combinations each. The crystals really aren't that big, and they house a massive amount of context. Even so, there are many logistics nightmares here, but it's a viable option for training a legitimate similarity-fed BERT or LLaMA meant specifically to form linguistic responses using those crystals as tuning forks for solidity.
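A minimal sketch of the depth-capped expansion described above, in Python. Everything here is an assumption for illustration: the names (expand, MAX_DEPTH, VOCAB_CAP, CRYSTALS_PER_ENTRY), the dictionary lookup, and the "crystal" records are placeholders, not part of any existing codebase.

from collections import deque

MAX_DEPTH = 3                 # e.g. text -> words -> definitions -> letters
VOCAB_CAP = 500_000           # hard cap on tracked vocabulary entries
CRYSTALS_PER_ENTRY = 50_000   # cap on crystal records stored per entry

def expand(text, lookup_definition, vocab):
    """Breadth-first, depth-capped traversal of the 'language web'.

    lookup_definition is a stand-in for whatever dictionary or similarity
    source supplies definitions (returns a string or None). vocab maps a
    token to a bounded list of crystal records.
    """
    queue = deque([(text, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth >= MAX_DEPTH:
            continue                          # hard depth cap
        children = node.split() if depth == 0 else list(node)
        definition = lookup_definition(node)
        if definition:
            children.append(definition)
        for child in children:
            if child not in vocab and len(vocab) >= VOCAB_CAP:
                continue                      # vocabulary is full
            slots = vocab.setdefault(child, [])
            if len(slots) < CRYSTALS_PER_ENTRY:
                slots.append((node, depth))   # placeholder crystal record
            queue.append((child, depth + 1))
    return vocab

With these caps in place, "the quick brown fox" stops expanding after a few layers instead of fanning out into billions of variants; raising MAX_DEPTH or the per-entry cap trades memory for coverage.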
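A similarly hypothetical sketch of the batch pass: only entries that actually occur in the incoming batch get touched, and the unspecified topological-tuning formulas are stubbed out as a plain moving-average nudge of each pentachoron vertex toward a batch context vector. EMBED_DIM, tune_batch, and embed are all assumptions.

import numpy as np

EMBED_DIM = 64   # illustrative width; a pentachoron is 5 vertices in this space

def tune_batch(batch_texts, vocab, embed, lr=0.05):
    """Update only the crystals whose tokens appear in this batch.

    vocab maps token -> (5, EMBED_DIM) array (one pentachoron per token,
    for brevity). embed(token) returns an (EMBED_DIM,) context vector;
    both are stand-ins for the real similarity machinery.
    """
    for text in batch_texts:
        for token in set(text.split()):
            if token not in vocab:
                continue                        # respect the hard vocab cap
            target = embed(token)               # (EMBED_DIM,)
            crystal = vocab[token]              # (5, EMBED_DIM)
            # Nudge every vertex toward the batch context; a placeholder
            # for the actual topological-tuning formulas.
            vocab[token] = crystal + lr * (target[None, :] - crystal)
    return vocab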

Organizations

DeepGHS, BangumiBase, Abstract Powered Research