├── README.md
├── example1.py
├── main.py
└── requirements.txt

/README.md:
--------------------------------------------------------------------------------
# Langchain-Transformers-Python

This guide walks you through setting up a Python environment, installing dependencies, configuring GPU usage, and running a transformer model with LangChain.

---

## 1. Create a Virtual Environment

Creating a virtual environment isolates this project's dependencies and prevents conflicts with other Python projects.

### **For Windows (Command Prompt)**
```sh
python -m venv langchain-env
langchain-env\Scripts\activate
```

### **For macOS/Linux (Terminal)**
```sh
python -m venv langchain-env
source langchain-env/bin/activate
```

---

## 2. Install Requirements

Once the virtual environment is activated, install the required dependencies (the same packages are listed in `requirements.txt`).

```sh
pip install langchain transformers langchain-huggingface
```

---

## 3. Configure GPU Usage

If you have an NVIDIA GPU, install the CUDA-enabled build of PyTorch.

Run the following command, replacing `cu126` with your CUDA version:

```sh
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126
```

To check which CUDA version you have installed, run:

```sh
nvcc --version
```

If you don't have CUDA installed, follow the official installation guide:
🔗 [CUDA Installation Guide](https://developer.nvidia.com/cuda-downloads)

---

## 4. Check for GPU Availability

Run the following Python code to verify that your GPU is available:

```python
import torch

# Check if a CUDA-capable GPU is available
gpu_available = torch.cuda.is_available()
device_name = torch.cuda.get_device_name(0) if gpu_available else "No GPU found"

print(f"GPU Available: {gpu_available}")
print(f"GPU Name: {device_name}")
```

If `torch.cuda.is_available()` returns `False`, ensure that:
- You have an NVIDIA GPU.
- The correct version of CUDA is installed.
- You installed the CUDA-enabled version of PyTorch.

---

## 5. Set Device in Pipeline

Once GPU availability is confirmed, specify the device in the transformer pipeline.

```python
from transformers import pipeline

# Load the model on the GPU (device=0 refers to the first GPU)
model = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.1",
    device=0
)

# Generate text
output = model("What is LangChain?")
print(output)
```

If you are running on a CPU instead of a GPU, change `device=0` to `device=-1`. A snippet that handles both cases automatically is shown below.
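If you want one script that runs on either kind of hardware, here is a minimal sketch of picking the device dynamically. It assumes the same model as above; the `device` variable name is just an illustrative choice:

```python
import torch
from transformers import pipeline

# Use the first GPU if one is available, otherwise fall back to the CPU
device = 0 if torch.cuda.is_available() else -1

model = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.1",
    device=device
)
```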
---

## 🎯 Summary

| Step | Command / Code |
|--------------------------|------------------------------------------------|
| **Create a Virtual Env** | `python -m venv langchain-env && source langchain-env/bin/activate` |
| **Install Requirements** | `pip install langchain transformers langchain-huggingface` |
| **Install GPU Support** | `pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126` |
| **Check GPU Availability** | `print(torch.cuda.is_available())` |
| **Run Model on GPU** | `pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.1", device=0)` |

---

Now you're ready to use **LangChain and Transformers** with GPU acceleration! 🚀
--------------------------------------------------------------------------------
/example1.py:
--------------------------------------------------------------------------------
from transformers import pipeline
from langchain_huggingface import HuggingFacePipeline
from langchain.prompts import PromptTemplate
from transformers.utils.logging import set_verbosity_error

# Silence non-error logging from transformers
set_verbosity_error()

# Load a Hugging Face summarization pipeline
model = pipeline("summarization", model="facebook/bart-large-cnn", device=0)

# Wrap the pipeline so LangChain can use it
llm = HuggingFacePipeline(pipeline=model)

# Prompt template for an age-targeted summary
template = PromptTemplate.from_template(
    "Summarize the following text in a way a {age} year old would understand:\n\n{text}"
)

summarizer_chain = template | llm

# Get user input
text_to_summarize = input("\nEnter text to summarize:\n")
age = input("Enter target age for simplification:\n")

# Run the summarization chain
summary = summarizer_chain.invoke({"text": text_to_summarize, "age": age})

print("\n🔹 **Generated Summary:**")
print(summary)
--------------------------------------------------------------------------------
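If you want to smoke-test the chain in example1.py without its interactive prompts, a minimal self-contained sketch follows; the sample text and the CPU device are illustrative choices, not part of the original script:

```python
from transformers import pipeline
from langchain_huggingface import HuggingFacePipeline
from langchain.prompts import PromptTemplate

# Same chain as example1.py, but on CPU (device=-1) so it runs anywhere
llm = HuggingFacePipeline(
    pipeline=pipeline("summarization", model="facebook/bart-large-cnn", device=-1)
)
template = PromptTemplate.from_template(
    "Summarize the following text in a way a {age} year old would understand:\n\n{text}"
)
chain = template | llm

# Made-up sample input for the test
sample_text = (
    "Photosynthesis is the process by which green plants use sunlight, water, "
    "and carbon dioxide to produce sugar and oxygen."
)
print(chain.invoke({"text": sample_text, "age": "7"}))
```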
/main.py:
--------------------------------------------------------------------------------
from transformers import pipeline
from langchain_huggingface import HuggingFacePipeline
from langchain.prompts import PromptTemplate
from transformers.utils.logging import set_verbosity_error

# Silence non-error logging from transformers
set_verbosity_error()

# First pass: summarize the input text
summarization_pipeline = pipeline("summarization", model="facebook/bart-large-cnn", device=0)
summarizer = HuggingFacePipeline(pipeline=summarization_pipeline)

# Second pass: refine the first summary with a different model
refinement_pipeline = pipeline("summarization", model="facebook/bart-large", device=0)
refiner = HuggingFacePipeline(pipeline=refinement_pipeline)

# Extractive question answering over the final summary
qa_pipeline = pipeline("question-answering", model="deepset/roberta-base-squad2", device=0)

summary_template = PromptTemplate.from_template("Summarize the following text in a {length} way:\n\n{text}")

# Chain: prompt -> summarizer -> refiner
summarization_chain = summary_template | summarizer | refiner

text_to_summarize = input("\nEnter text to summarize:\n")
length = input("\nEnter the length (short/medium/long): ")

summary = summarization_chain.invoke({"text": text_to_summarize, "length": length})

print("\n🔹 **Generated Summary:**")
print(summary)

# Interactive Q&A loop over the generated summary
while True:
    question = input("\nAsk a question about the summary (or type 'exit' to stop):\n")
    if question.lower() == "exit":
        break

    qa_result = qa_pipeline(question=question, context=summary)

    print("\n🔹 **Answer:**")
    print(qa_result["answer"])
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
transformers
langchain
langchain-huggingface
--------------------------------------------------------------------------------
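As a quick sanity check of the question-answering step used in main.py, here is a minimal standalone sketch; the context and question are made-up examples, and device=-1 keeps it runnable without a GPU:

```python
from transformers import pipeline

# Same QA model as main.py, on CPU so the check runs anywhere
qa = pipeline("question-answering", model="deepset/roberta-base-squad2", device=-1)

# Hypothetical inputs, used only to verify the pipeline end to end
result = qa(
    question="What does LangChain compose?",
    context="LangChain composes prompts, models, and output parsers into chains.",
)
print(result["answer"])  # an extracted span such as "prompts, models, and output parsers"
```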