Tools Used: Python, Jupyter
Project Description: This project uses Python and machine learning to detect fraudulent credit card transactions. Various models like Logistic Regression, Decision Tree, Random Forest, XGBoost, and AdaBoost are applied. SMOTE oversampling is used to address class imbalance and improve fraud detection. Descriptive statistics help explore the dataset and identify key fraud indicators. The goal is to build an efficient model that accurately detects fraud and minimizes financial loss.
Concepts Used: Python, Jupyter Notebook, scikit-learn, Logistic Regression, Decision Tree, Random Forest, XGBoost, AdaBoost, SMOTE (Oversampling), Data Preprocessing, Descriptive Statistics, Model Evaluation (Precision, Recall, F1-Score)
GitHub: Open GitHub Repository
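Below is a minimal sketch of the modeling pipeline described above. The dataset file name, the target column name ("Class"), and the choice of Random Forest as the illustrated classifier are assumptions for illustration, not details taken from the repository.

```python
# Sketch: SMOTE oversampling + classifier + precision/recall/F1 evaluation.
# File name and column names below are hypothetical.
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("creditcard.csv")                      # hypothetical file name
X, y = df.drop(columns=["Class"]), df["Class"]          # "Class" = fraud label (assumed)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Oversample only the training split so the test set keeps the real class ratio
X_res, y_res = SMOTE(random_state=42).fit_resample(X_train, y_train)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_res, y_res)

# Precision, recall, and F1 matter more than accuracy on imbalanced fraud data
print(classification_report(y_test, model.predict(X_test), digits=4))
```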
Tools Used: Python, Jupyter
Project Description: This project focuses on scraping the latest data science job listings from the TimesJobs website. It extracts essential details, including job titles, company names, required skills, and application links. The script is designed to run every five hours, ensuring that the data remains up to date. The extracted information is saved into a file for easy access and analysis.
Concepts Used: Web Scraping, Beautiful Soup, Requests, HTML Parsing, File Handling
GitHub: Open GitHub Repository
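A minimal sketch of the scraping loop described above, assuming the TimesJobs search-results markup; the CSS class names, search URL, and output file name are illustrative and may differ from the actual script.

```python
# Sketch: scrape data science listings from TimesJobs every five hours.
import time
import requests
from bs4 import BeautifulSoup

URL = ("https://www.timesjobs.com/candidate/job-search.html"
       "?searchType=personalizedSearch&from=submit&txtKeywords=data+science")

def scrape_once(path="jobs.txt"):
    html = requests.get(URL, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    with open(path, "w", encoding="utf-8") as f:
        # Each job card is assumed to be an <li> with these classes
        for job in soup.find_all("li", class_="clearfix job-bx wht-shd-bx"):
            title = job.header.h2.a.text.strip()
            company = job.find("h3", class_="joblist-comp-name").text.strip()
            skills = job.find("span", class_="srp-skills").text.strip()
            link = job.header.h2.a["href"]
            f.write(f"{title} | {company} | {skills} | {link}\n")

if __name__ == "__main__":
    while True:
        scrape_once()
        time.sleep(5 * 60 * 60)   # wait five hours before refreshing the data
```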
Tools Used: Python, Jupyter
Project Description: This project involves live web scraping of Amazon product data, including product names, prices, links, and reviews, to create a structured table. Using Beautiful Soup and Selenium, the project extracts real-time information from the Amazon website. The data is then processed and organized into a pandas DataFrame for further analysis and visualization.
Concepts Used: Web Scraping, Beautiful Soup, Selenium, Pandas, HTML Parsing
GitHub: Open GitHub Repository
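A minimal sketch of the Selenium + Beautiful Soup flow described above. The search URL and CSS selectors are assumptions; Amazon's markup changes frequently, so the actual script may target different elements.

```python
# Sketch: render an Amazon search page with Selenium, parse it with
# Beautiful Soup, and collect the results into a pandas DataFrame.
import pandas as pd
from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://www.amazon.com/s?k=laptop")   # hypothetical search query
soup = BeautifulSoup(driver.page_source, "html.parser")
driver.quit()

rows = []
# Each result card is assumed to carry this data-component-type attribute
for card in soup.select("div[data-component-type='s-search-result']"):
    name = card.select_one("h2")
    price = card.select_one("span.a-offscreen")
    link = card.select_one("h2 a")
    rows.append({
        "name": name.get_text(strip=True) if name else None,
        "price": price.get_text(strip=True) if price else None,
        "link": "https://www.amazon.com" + link["href"] if link else None,
    })

df = pd.DataFrame(rows)
print(df.head())
```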
Tools Used: Python, Excel
Project Description: This project integrates Python with Excel to automate repetitive tasks such as data cleaning, report generation, and complex calculations. Using Python libraries like openpyxl and pandas, it updates Excel sheets, creates dynamic reports, and applies formulas, improving efficiency and reducing manual errors.
Concepts Used: Python, Excel, openpyxl, Pandas, Excel Automation, Data Manipulation, Report Generation, Formula Application, File Handling
GitHub: Open GitHub Repository
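A minimal sketch of the Excel automation pattern described above. The file names, sheet names, and column layout are assumptions made for illustration.

```python
# Sketch: clean data with pandas, write a report workbook, then use openpyxl
# to inject an Excel formula into the summary sheet.
import pandas as pd
from openpyxl import load_workbook

# Clean raw data with pandas (hypothetical input file and columns)
df = pd.read_excel("sales_raw.xlsx")
df = df.dropna(subset=["Amount"]).drop_duplicates()
summary = df.groupby("Region", as_index=False)["Amount"].sum()

# Write cleaned data and summary to a report workbook
with pd.ExcelWriter("sales_report.xlsx", engine="openpyxl") as writer:
    df.to_excel(writer, sheet_name="Cleaned", index=False)
    summary.to_excel(writer, sheet_name="Summary", index=False)

# Reopen with openpyxl to apply a live Excel formula (grand total)
wb = load_workbook("sales_report.xlsx")
ws = wb["Summary"]
ws[f"B{ws.max_row + 1}"] = f"=SUM(B2:B{ws.max_row})"
wb.save("sales_report.xlsx")
```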
Tools Used: Python, Jupyter
Project Description: This project aims to predict the survival of passengers on the Titanic using machine learning techniques. The Titanic dataset is analyzed to identify factors influencing survival, such as age, gender, passenger class, and fare. Various machine learning models, including Logistic Regression, Decision Trees, Random Forest, and Support Vector Machines, are applied to classify passengers as survivors or non-survivors. Data preprocessing techniques like handling missing values and feature encoding are utilized to prepare the dataset for modeling. The project seeks to build an accurate predictive model to enhance understanding of survival factors and improve future safety measures.
Concepts Used: Python, Jupyter Notebook, scikit-learn, Logistic Regression, Decision Trees, Random Forest, Data Preprocessing, Feature Engineering, Model Evaluation (Accuracy, Precision, Recall, F1-Score)
GitHub: Open GitHub Repository
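A minimal sketch of the Titanic workflow described above, assuming the standard Kaggle column names (Survived, Pclass, Sex, Age, Fare, Embarked); Logistic Regression stands in for the several models the project compares.

```python
# Sketch: preprocess the Titanic data (imputation + encoding), train a
# Logistic Regression model, and report accuracy/precision/recall/F1.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("titanic.csv")                   # hypothetical file name

# Handle missing values and encode categorical features
df["Age"] = df["Age"].fillna(df["Age"].median())
df["Embarked"] = df["Embarked"].fillna(df["Embarked"].mode()[0])
features = pd.get_dummies(df[["Pclass", "Sex", "Age", "Fare", "Embarked"]],
                          drop_first=True)
target = df["Survived"]

X_train, X_test, y_train, y_test = train_test_split(
    features, target, test_size=0.2, stratify=target, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```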