1. Working with JSON
JSON is the standard data-exchange format in APIs and AI pipelines, and Python's json module handles it natively.
1.1 Parsing JSON
import json

json_string = '{"name": "Vivek", "age": 28}'
data = json.loads(json_string)
print(data["name"])
1.2 Converting to JSON
import json

data = {"name": "Vivek", "age": 28}
json_string = json.dumps(data)
Useful in:
- Azure Functions
- FastAPI
- RAG APIs
2. Making HTTP Calls (requests library)
Install:
pip install requests
2.1 Basic GET
import requests

response = requests.get("https://api.example.com/data")
if response.status_code == 200:
    print(response.json())
2.2 POST Request
payload = {"name": "Vivek"}
response = requests.post(
    "https://api.example.com/data",
    json=payload,
)
Used heavily in:
- Calling OpenAI APIs
- Calling Azure REST APIs
3. FastAPI (Modern Python Backend Framework)
Lightweight & fast.
Install:
pip install fastapi uvicorn
3.1 Simple API
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    return {"message": "Hello World"}
Run:
uvicorn main:app --reload
3.2 Path Parameters
@app.get("/items/{item_id}")
def read_item(item_id: int):
    return {"item_id": item_id}
3.3 Pydantic Models
from pydantic import BaseModel

class Item(BaseModel):
    name: str
    price: float

@app.post("/items/")
def create_item(item: Item):
    return item
Very relevant for AI APIs.
4. Async in Python
Important for AI + API systems.
4.1 Basic Async
import asyncio

async def say_hello():
    await asyncio.sleep(1)
    print("Hello")

asyncio.run(say_hello())
4.2 Async API Endpoint (FastAPI)
@app.get("/")
async def read_root():
    return {"message": "Hello"}
When to Use Async?
- I/O bound tasks
- Calling LLM APIs
- Database queries
Not ideal for CPU-bound tasks.
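The I/O-bound case above can be sketched with asyncio.gather, which runs several awaitables concurrently. Here fetch_data is a made-up stand-in for a real I/O call such as an HTTP request or LLM API call:

```python
import asyncio

async def fetch_data(name: str) -> str:
    # Stand-in for an I/O-bound call (HTTP request, DB query, LLM API)
    await asyncio.sleep(0.1)
    return f"result-{name}"

async def main() -> list:
    # Run three "requests" concurrently; total wall time is about
    # one request's latency, not three
    return await asyncio.gather(
        fetch_data("a"), fetch_data("b"), fetch_data("c")
    )

results = asyncio.run(main())
print(results)  # ['result-a', 'result-b', 'result-c']
```

This is why async shines for calling LLM APIs: many slow network calls can be in flight at once on a single thread.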
5. Threading vs Multiprocessing
5.1 Threading
import threading

def task():
    print("Running")

t = threading.Thread(target=task)
t.start()
Limited by the GIL for CPU-bound tasks.
5.2 Multiprocessing
from multiprocessing import Process

def task():
    print("Running")

if __name__ == "__main__":  # required on platforms that spawn processes
    p = Process(target=task)
    p.start()
    p.join()
True parallel execution.
6. Python in RAG Workflows
Python is the dominant language in the AI ecosystem.
Typical RAG stack:
Load docs → Chunk → Embed → Store in Vector DB → Retrieve → LLM
6.1 Example Embedding Call (Conceptual)
response = client.embeddings.create(
    input="Hello world",
    model="text-embedding-3-small",
)
vector = response.data[0].embedding
6.2 Simple Retrieval Concept
query_embedding = embed(query)
results = vector_db.search(query_embedding, top_k=5)
6.3 Prompt Augmentation
context = "\n".join(results)
prompt = f"""
Answer using only the context below:

{context}

Question: {query}
"""
This is core RAG logic.
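The retrieve step can be made concrete with a toy in-memory store. The cosine function is standard, but the documents and their 2-D vectors are fabricated for illustration; a real system would use an embedding model and a vector database:

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product over the product of magnitudes
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "vector DB": (text, embedding) pairs with made-up 2-D vectors
docs = [
    ("Python basics", [1.0, 0.0]),
    ("Async in Python", [0.9, 0.1]),
    ("Cooking pasta", [0.0, 1.0]),
]

def search(query_vec, top_k=2):
    # Rank documents by similarity to the query vector
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]

print(search([1.0, 0.05]))  # ['Python basics', 'Async in Python']
```

Real embeddings have hundreds or thousands of dimensions, but the ranking logic is the same.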
7. Working with Azure SDK (Conceptual Overview)
Install:
pip install azure-identity azure-search-documents
7.1 Managed Identity
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
7.2 Azure AI Search Example
from azure.search.documents import SearchClient

client = SearchClient(
    endpoint="YOUR_ENDPOINT",
    index_name="your-index",
    credential=credential,
)
8. Handling Large Data Safely
Use generators instead of loading everything.
Example:
def read_large_file(file):
    with open(file) as f:
        for line in f:
            yield line
Prevents memory overflow.
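For example, an aggregate can be computed lazily without materializing a list; here an in-memory iterable stands in for file lines:

```python
def line_lengths(lines):
    # Yield one value at a time instead of building a full list
    for line in lines:
        yield len(line.rstrip("\n"))

fake_file = ["alpha\n", "beta\n", "gamma\n"]

# sum() consumes the generator one item at a time, so memory use
# stays constant regardless of input size
total = sum(line_lengths(fake_file))
print(total)  # 14
```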
9. Common Python Interview Questions (FAQ)
What is GIL?
The Global Interpreter Lock prevents multiple threads from executing Python bytecode simultaneously.
List vs Tuple?
List = mutable
Tuple = immutable
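A quick illustration of the difference, including the practical consequence that tuples are hashable and lists are not:

```python
nums_list = [1, 2, 3]
nums_list[0] = 99          # lists are mutable

nums_tuple = (1, 2, 3)
try:
    nums_tuple[0] = 99     # tuples are immutable
except TypeError:
    print("tuple rejects item assignment")

# Tuples can be dict keys or set members; lists cannot
coords = {(0, 0): "origin"}
```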
What is a generator?
A function that yields values lazily using the yield keyword.
What are decorators?
Functions that modify the behavior of other functions.
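A minimal decorator, using made-up names for illustration, that wraps a function and transforms its result:

```python
import functools

def shout(func):
    # Decorator: returns a wrapper that post-processes the result
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name):
    return f"hello {name}"

print(greet("vivek"))  # HELLO VIVEK
```

functools.wraps preserves the wrapped function's name and docstring, which matters for debugging and introspection.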
What is duck typing?
If it behaves like a duck, treat it like a duck.
Python uses behavior-based typing, not strict interfaces.
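A small sketch with made-up classes: make_noise never checks types, only that the object has a speak method:

```python
class Dog:
    def speak(self):
        return "woof"

class Robot:
    def speak(self):
        return "beep"

def make_noise(thing):
    # No isinstance check -- anything with a .speak() method works
    return thing.speak()

print(make_noise(Dog()), make_noise(Robot()))  # woof beep
```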
Deep copy vs Shallow copy?
Shallow copies reference inner objects.
Deep copy clones everything.
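The difference shows up with nested objects, as in this sketch using the copy module:

```python
import copy

original = [[1, 2], [3, 4]]

shallow = copy.copy(original)    # new outer list, inner lists shared
deep = copy.deepcopy(original)   # everything cloned recursively

original[0].append(99)

print(shallow[0])  # [1, 2, 99] -- shared inner list sees the change
print(deep[0])     # [1, 2]     -- deep copy is unaffected
```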
10. Quick Cheatsheet Summary
- Dynamic typing
- Indentation defines scope
- Dict & set use hashing
- Async is I/O friendly
- GIL limits CPU threading
- Dataclasses simplify models
- FastAPI is modern backend choice
- Generators are memory efficient
