github.com/Alpha-VLLM/LLaMA2-Accessory
🚀 LLaMA2-Accessory is an open-source toolkit for pre-training, fine-tuning, and deployment of Large Language Models (LLMs) and multimodal LLMs. This repo is mainly inherited from LLaMA-Adapter with more advanced features.

🧠 News
- [2023.07.23] Initial release

📌 Features

💡 Support More Datasets and Tasks
- 🎯 Pre-training with RefinedWeb and StarCoder.
- 📚 Single-modal fine-tuning with Alpaca, ShareGPT, LIMA, UltraChat, and MOSS.
- 🌈 Multi-modal fine-tuning with image-text pairs (LAION, COYO, and more), interleaved image-text data (MMC4 and OBELISC), and visual instruction data (LLaVA, Shikra, Bard).
- 🔧 LLM for API Control (GPT4Tools and Gorilla).

⚡ Efficient Optimization and Deployment
- 🚝 Parameter-efficient fine-tuning with Zero-init Attention and Bias-norm Tuning.
- 💻 Fully Sharded Data Parallel (FSDP), Flash Attention 2, and QLoRA.

🏋️‍♀️ Support More Visual Encoders and LLMs
- 👁‍🗨 Visual Encoders: CLIP, Q-Former, and ImageBind.
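The Zero-init Attention technique mentioned above (inherited from LLaMA-Adapter) scales the adapter's attention output by a learnable gate initialized to zero, so fine-tuning starts from the unmodified pretrained model and the adapter's influence grows gradually. A minimal sketch of that gating idea, assuming a simple residual blend; the function and parameter names here are illustrative and not the toolkit's actual API:

```python
import math

def gated_residual(hidden: list[float], adapter_out: list[float], gate: float) -> list[float]:
    """Blend the adapter branch into the frozen branch via a tanh-squashed gate.

    With gate == 0.0 (its initial value in zero-init attention), tanh(0) == 0,
    so the output equals the frozen model's hidden states exactly.
    """
    g = math.tanh(gate)
    return [h + g * a for h, a in zip(hidden, adapter_out)]

# At initialization the gate is zero, so the adapter contributes nothing:
hidden = [0.5, -1.0, 2.0]
print(gated_residual(hidden, [9.9, 9.9, 9.9], gate=0.0))  # → [0.5, -1.0, 2.0]
```

Because the gate starts at zero, early training cannot destabilize the pretrained weights; the optimizer learns how much adapter signal to admit.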