This content originally appeared on HackerNoon and was authored by Computational Technology for All
:::info Authors:
(1) Xiaofan Yu, University of California San Diego, La Jolla, California, USA (x1yu@ucsd.edu);
(2) Anthony Thomas, University of California San Diego, La Jolla, California, USA (ahthomas@ucsd.edu);
(3) Ivannia Gomez Moreno, CETYS University, Campus Tijuana, Tijuana, Mexico (ivannia.gomez@cetys.edu.mx);
(4) Louis Gutierrez, University of California San Diego, La Jolla, California, USA (l8gutierrez@ucsd.edu);
(5) Tajana Šimunić Rosing, University of California San Diego, La Jolla, California, USA (tajana@ucsd.edu).
:::
Table of Links
8 Evaluation of LifeHD semi and LifeHDa
9 Discussions and Future Works
10 Conclusion, Acknowledgments, and References
9 DISCUSSIONS AND FUTURE WORKS
Problem Scale. One limitation of LifeHD is its relatively small problem scale (e.g., the image size in CIFAR-100 is restricted to 32x32), which stems from the inherent difficulty of the unsupervised lifelong learning problem: single-pass, non-iid data and no supervision. For the same reason, a gap in accuracy remains between unsupervised lifelong learning and fully supervised NNs, as substantiated by prior research [13, 54]. To scale LifeHD to more challenging applications such as self-driving vehicles, one promising direction is to leverage a pretrained foundation model as a frozen feature extractor within the HDnn framework, which we leave for future investigation.
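\ As a concrete illustration of this direction, the minimal sketch below pairs a frozen pretrained backbone with a fixed random projection into HD space. The choice of backbone (MobileNetV3-Small) and the projection-based encoding are our illustrative assumptions, not LifeHD's exact HDnn pipeline.

```python
# Sketch: a frozen pretrained backbone as the HDnn feature extractor.
# The backbone and the random-projection encoding are illustrative
# assumptions, not LifeHD's exact pipeline.
import torch
import torchvision.models as models

D = 10000  # hypervector dimensionality

# Frozen feature extractor: drop the classifier head, disable gradients.
backbone = models.mobilenet_v3_small(weights="DEFAULT")
backbone.classifier = torch.nn.Identity()
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False

# Fixed random projection from feature space into HD space.
feat_dim = 576  # MobileNetV3-Small penultimate feature dimension
proj = torch.randn(feat_dim, D) / feat_dim ** 0.5

@torch.no_grad()
def encode(images: torch.Tensor) -> torch.Tensor:
    """Map a batch of images to bipolar hypervectors."""
    feats = backbone(images)          # (B, feat_dim)
    return torch.sign(feats @ proj)   # (B, D), entries in {-1, +1}
```

Because the backbone stays frozen, only the lightweight HD operations run during lifelong learning, which preserves the on-device efficiency that motivates HDnn in the first place.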
Hyperparameter Tuning. While we recognize that hyperparameters can influence the performance of LifeHD, this issue is not exclusive to LifeHD; it has persistently been a challenge across machine learning research [7]. In LifeHD, the impact of hyperparameters can be mitigated through pre-deployment evaluation and component co-design. For example, encoding parameters such as 𝑄 and 𝑃 can be tuned on similar health monitoring data sources prior to deployment. Meanwhile, the cluster HV merging component increases LifeHD's resilience to the novelty detection threshold 𝛾, since an excess of novel clusters can be merged in later stages of learning.
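\ To illustrate why merging relaxes the pressure on 𝛾, here is a minimal sketch of threshold-based novelty detection followed by greedy cluster-HV merging. The cosine-similarity merge criterion and the merge_thresh parameter are assumptions for this sketch, not LifeHD's exact merge rule.

```python
# Sketch of novelty detection and cluster-HV merging. The greedy
# cosine-similarity merge rule and `merge_thresh` are illustrative
# assumptions, not LifeHD's exact algorithm.
import torch
import torch.nn.functional as F

def is_novel(x_hv: torch.Tensor, clusters: torch.Tensor, gamma: float) -> bool:
    """A sample is novel if its best match among cluster HVs falls below gamma."""
    sims = F.cosine_similarity(clusters, x_hv.unsqueeze(0), dim=1)
    return sims.max().item() < gamma

def merge_clusters(hvs: torch.Tensor, merge_thresh: float = 0.8) -> torch.Tensor:
    """Greedily bundle pairs of cluster HVs whose similarity exceeds merge_thresh.
    Even if a strict gamma spawns too many novel clusters, similar ones collapse here."""
    clusters = [hv.clone() for hv in hvs]
    i = 0
    while i < len(clusters):
        j = i + 1
        while j < len(clusters):
            if F.cosine_similarity(clusters[i], clusters[j], dim=0) > merge_thresh:
                # Bundle by elementwise addition, then re-binarize.
                # (Ties map to 0 here; a real system would keep counters.)
                clusters[i] = torch.sign(clusters[i] + clusters[j])
                clusters.pop(j)
            else:
                j += 1
        i += 1
    return torch.stack(clusters)
```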
Limitations of HDC. HDC serves as the fundamental core of LifeHD. While HDC is promising for its notably lightweight design, it carries several limitations that remain active areas of research. First, for complex data such as audio and images, HDC requires a pretrained feature extractor (the HDnn encoding), which may not exist for certain applications. Moreover, as with any other architecture, HD vectors face capacity limits determined by the dimension of the HD space, the encoding method, and the noise level in the input data [56]. Due to these factors, careful evaluation and sometimes manual feature engineering are required to deploy HDC successfully in new applications.
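\ To make the capacity point concrete, the short probe below bundles k random bipolar item HVs into one memory vector and checks whether a stored item remains distinguishable from an unseen one; retrieval degrades as k grows relative to the dimension D. The experiment and its parameters are our own illustration, not a result from the paper or from [56].

```python
# Sketch: an empirical probe of HD capacity limits. All parameters are
# illustrative; the numbers are not results from the paper.
import torch

def capacity_probe(D: int, k: int, trials: int = 200) -> float:
    """Fraction of trials in which a stored item is closer to the
    bundled memory HV than a fresh random HV is."""
    correct = 0
    for _ in range(trials):
        items = torch.sign(torch.randn(k, D))   # k bipolar item HVs
        memory = torch.sign(items.sum(dim=0))   # bundle into one memory HV
        stored, foil = items[0], torch.sign(torch.randn(D))
        correct += int((memory @ stored) > (memory @ foil))
    return correct / trials

# Retrieval accuracy improves with dimension D for a fixed number of items.
for D in (256, 1024, 10000):
    print(f"D={D:>5}: {capacity_probe(D, k=51):.2f}")
```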
Future Works. Although LifeHD focuses on single-device lifelong learning for classification tasks, the method can be extended to other types of tasks and learning settings, such as federated learning and reinforcement learning. We leave the investigation of these topics for future work.
:::info This paper is available on arxiv under CC BY-NC-SA 4.0 DEED license.
:::