At a glance. AI tools from reputable companies are safe to use for everyday work — provided you follow three simple rules: never share sensitive identifiers, verify factual claims with a professional, and learn to recognise AI-driven scams. This one-page guide covers all three.
1. The Golden Rule — Never Share These With Any AI Tool
Do NOT type into ChatGPT, Gemini, Claude, Copilot, or any other AI:
- Aadhaar number, PAN number, voter-ID or driving-licence number
- Bank account number, IFSC, debit/credit card number, UPI PIN
- Any OTP — banks, NPCI and government will never ask for one through AI
- Passwords for any account, including the AI tool itself
- Full medical reports with your name and hospital ID visible
- Photographs of your home, family, or vehicles with number plates
- Personal details of grandchildren, especially school name and address
If a question needs sensitive details, replace them with placeholders before asking. For example: "Draft a complaint about transaction of Rs XX,XXX on date DD-MM-YYYY to bank XYZ." Fill in the real numbers only in the final printed letter.
2. AI Hallucinations — When AI Sounds Confident but Is Wrong
AI sometimes makes up facts with complete confidence. It might quote a non-existent court case, invent a medicine dose, or cite a fake research paper. Hallucinations are most dangerous in three areas:
- Medical: always verify symptoms, doses and treatments with a qualified doctor.
- Legal: have any AI-drafted legal letter or claim reviewed by an advocate.
- Financial: never act on AI's tax, investment or insurance advice without a CA or SEBI-registered adviser.
3. AI-Driven Scams Targeting Seniors
Scammers are now using AI tools too. Watch for these in 2026:
- Deepfake voice calls. A scammer clones your son's or daughter's voice from a 10-second WhatsApp video and calls you saying "Papa, I am in trouble, send Rs 50,000 now." Defence: hang up, call your child's normal number, confirm.
- AI-generated phishing emails. These are now grammatically perfect and personalised — they may quote your real ex-employer or society name. Defence: never click links inside emails. Type the bank or website address yourself.
- Fake AI investment tips. WhatsApp groups promising an "AI trading bot" with "20% monthly returns". Defence: if the scheme or adviser is not SEBI-registered, it is operating illegally. Walk away.
- Romance and friendship scams. AI chatbots impersonating people on Facebook and dating apps. Defence: never send money to anyone you have not met in person.
4. Recognising AI-Generated Content
Three quick tells: (1) photos with hands, ears or teeth that look slightly off; (2) videos where lip-sync is half a second behind the voice; (3) text with no spelling mistakes but oddly generic phrasing ("very important", "key insights", "leveraging opportunities"). When in doubt, do a Google reverse-image search on photos.
5. Practical Account Security
- Use a separate strong password for your AI account — not the same as Gmail or banking.
- Enable two-factor authentication: in ChatGPT's own settings, and via your Google account's 2-Step Verification for Gemini.
- Review your chat history monthly and delete anything personal.
- In ChatGPT, go to Settings → Data Controls → turn OFF "Improve the model for everyone" if you want extra privacy.
- Sign out on shared devices. Never stay logged in on a cyber-cafe machine.
6. Indian-Context Safety Checklist (Print This)
- ☐ No Aadhaar, PAN or bank details ever typed into AI
- ☐ No OTPs shared — with anyone, ever, period
- ☐ Family code word agreed with children for "emergency" calls
- ☐ Medical AI answers verified with my doctor before acting
- ☐ Legal AI drafts reviewed by an advocate before sending
- ☐ Investment AI tips ignored unless SEBI-registered
- ☐ AI account has its own strong password + 2FA on
- ☐ Chat history reviewed and cleaned monthly
- ☐ Suspicious calls reported to Cyber Crime helpline 1930
- ☐ Photos with hands/faces re-checked before believing them
7. Where to Report
If you fall victim to a cyber scam — even a small one — report it within 24 hours:
- Cyber Crime Helpline: Dial 1930 (24x7, national)
- Online portal: cybercrime.gov.in
- Bank: call the number printed on the back of your card and ask them to block the card immediately.
8. When in Doubt — One Question to Ask Yourself
Before sharing anything with AI or acting on what it tells you, ask: "Would I be comfortable if this conversation were read out at my retirement club tomorrow morning?" If the answer is no, don't share it. And if you wouldn't bet money on the AI's answer without checking it, check it first.
Try this now. Open the WhatsApp group with your immediate family. Propose a simple family code word, one that only immediate family members know, to be used during any emergency call. This single step defeats the most dangerous deepfake voice scam targeting Indian seniors right now.
Key Takeaways
- Never share Aadhaar, OTP, bank details, passwords or full medical reports with AI.
- Verify medical, legal and financial AI answers with a qualified professional.
- Watch for deepfake voice calls — agree a family code word today.
- Use a separate strong password and 2FA for your AI account.
- Report cyber crime within 24 hours on 1930 or cybercrime.gov.in.
Related Resources