REST Labs Interdisciplinary Research Laboratory

DATA ANALYTICS LAB

Pioneering the future of decision-making with advanced data analysis, optimization techniques, and statistical modeling.


OUR EXPERT TEAM

IN-CHARGE

Chandrasekar Raja

Core Scientist

LAB STAFF

Ramya Sharma

Associate Scientist

Nathiya Murali

Associate Scientist

Arunambigai Ramesh

Associate Scientist

Poonkodi Sathiyamoorthy

Associate Scientist

A Tamilarasan

Associate Scientist

OUR ANALYSIS METHODOLOGIES

MULTI-CRITERIA DECISION MAKING (MCDM)

Weight Allocation Methods

Weight allocation methods are essential in multi-criteria decision-making (MCDM): they assign a degree of importance to each criterion, and the resulting weights underpin robust, rational comparisons of alternatives in complex decision problems.

Mean Weight Method, Standard Deviation Method, Entropy Method, AHP Method, CRITIC Method, Best-Worst Method
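For instance, the Entropy Method derives objective weights from how much each criterion varies across alternatives. A minimal Python sketch, using a made-up decision matrix with strictly positive entries:

```python
import numpy as np

# Decision matrix: rows = alternatives, columns = criteria (made-up data).
X = np.array([
    [250.0, 16.0, 12.0],
    [200.0, 16.0,  8.0],
    [300.0, 32.0, 16.0],
    [275.0, 32.0,  8.0],
])

# 1. Normalise each column so it sums to 1 (a probability distribution).
P = X / X.sum(axis=0)

# 2. Shannon entropy of each criterion; k keeps e_j within [0, 1].
k = 1.0 / np.log(X.shape[0])
E = -k * np.sum(P * np.log(P), axis=0)

# 3. Degree of diversification: lower-entropy criteria carry more information.
d = 1.0 - E

# 4. Entropy weights: normalise the diversification scores.
w = d / d.sum()
print("entropy weights:", w.round(4))
```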

Additional MCDM Methods

Additional MCDM methods enhance decision-making by evaluating alternatives based on multiple criteria. These approaches provide diverse analytical strategies to support nuanced and context-sensitive evaluations in complex scenarios.

TOPSIS, COPRAS, EDAS, ELECTRE, DEMATEL, PROMETHEE, VIKOR, GRA, MOORA, ARAS, WPM, WSM, WASPAS, Fuzzy TOPSIS, Fuzzy ARAS
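As an illustration, TOPSIS ranks alternatives by their closeness to an ideal solution. A minimal sketch with a hypothetical `topsis` helper; the matrix, weights, and criterion directions are made up:

```python
import numpy as np

def topsis(X, w, benefit):
    """Rank alternatives with TOPSIS.
    X: decision matrix (alternatives x criteria); w: weights summing to 1;
    benefit: boolean mask, True where larger values are better."""
    # Vector-normalise each column, then apply the criterion weights.
    R = X / np.linalg.norm(X, axis=0)
    V = R * w
    # Ideal and anti-ideal points depend on each criterion's direction.
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    # Closeness coefficient: 1 = at the ideal, 0 = at the anti-ideal.
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)

X = np.array([[250, 16, 12], [200, 16, 8], [300, 32, 16], [275, 32, 8]], float)
scores = topsis(X, w=np.array([0.4, 0.35, 0.25]),
                benefit=np.array([False, True, True]))  # first criterion is a cost
print("ranking (best first):", np.argsort(-scores))
```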

STATISTICAL ANALYSIS (SPSS)

Descriptive Statistics

Descriptive statistics summarise and organise data through measures such as mean, median, mode, variance, and standard deviation. These tools are fundamental for understanding the distribution, central tendency, and variability within datasets.

Frequency Distribution, Central Tendency, Dispersion Measures, Crosstabs
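A quick pandas sketch of these summaries, using a made-up survey table:

```python
import pandas as pd

# Made-up survey responses.
df = pd.DataFrame({
    "age":   [23, 25, 31, 35, 35, 40, 52],
    "score": [3, 4, 4, 5, 2, 4, 5],
    "group": ["A", "B", "A", "B", "A", "B", "A"],
})

print(df["age"].describe())          # count, mean, std, quartiles
print("mode:", df["score"].mode()[0])
print("variance:", df["age"].var())  # sample variance (ddof=1)

# Frequency distribution and a crosstab of group against score.
print(df["score"].value_counts().sort_index())
print(pd.crosstab(df["group"], df["score"]))
```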

Inferential Statistics

Inferential statistics draw conclusions about a population based on sample data using techniques like hypothesis testing, confidence intervals, and regression analysis. They help make predictions, determine relationships, and assess statistical significance, allowing generalisations beyond the observed dataset with a quantifiable degree of uncertainty.

t-Test, ANOVA, MANOVA, Chi-Square Test, Correlation, Regression Analysis
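For example, an independent-samples t-test and a confidence interval can be run with SciPy; the two samples below are simulated:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Made-up samples, e.g. test scores under two teaching methods.
a = rng.normal(loc=72, scale=8, size=30)
b = rng.normal(loc=76, scale=8, size=30)

# Independent-samples t-test: do the group means differ significantly?
t, p = stats.ttest_ind(a, b)
print(f"t = {t:.3f}, p = {p:.4f}")

# 95% confidence interval for the mean of sample a.
ci = stats.t.interval(0.95, df=len(a) - 1,
                      loc=a.mean(), scale=stats.sem(a))
print("95% CI for mean(a):", tuple(round(x, 2) for x in ci))
```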

Non-Parametric Tests

Non-parametric tests analyse data without assuming a specific distribution, making them suitable for small samples and ordinal or skewed data.

Mann-Whitney U Test, Wilcoxon Signed-Rank Test, Kruskal-Wallis Test, Friedman Test
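A short SciPy sketch of two of these tests, on made-up ordinal ratings:

```python
from scipy import stats

# Made-up ordinal ratings from two independent groups.
group1 = [3, 4, 2, 5, 4, 3, 4]
group2 = [2, 1, 3, 2, 3, 1, 2]

# Mann-Whitney U: no normality assumption; compares the groups' locations.
u, p = stats.mannwhitneyu(group1, group2, alternative="two-sided")
print(f"U = {u}, p = {p:.4f}")

# Kruskal-Wallis extends the idea to three or more independent groups.
h, p = stats.kruskal(group1, group2, [4, 5, 5, 4, 3, 5, 4])
print(f"H = {h:.3f}, p = {p:.4f}")
```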

Factor & Reliability Analysis

Factor analysis identifies underlying relationships between variables by grouping them into latent factors, reducing dimensionality while preserving key information. Reliability analysis assesses the consistency of a measurement scale, often using Cronbach’s alpha, ensuring the instrument’s dependability in capturing true constructs without excessive random error.

PCA, EFA, CFA, Cronbach's Alpha
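Cronbach's alpha has a simple closed form. A minimal NumPy sketch; the `cronbach_alpha` helper and the Likert responses are illustrative:

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Made-up 5-point Likert responses (6 respondents x 4 items).
X = np.array([
    [4, 5, 4, 4],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 4],
    [3, 4, 3, 3],
])
print(f"alpha = {cronbach_alpha(X):.3f}")  # values around 0.7+ are commonly read as acceptable
```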

Multivariate Analysis

Multivariate analysis examines multiple variables simultaneously to uncover patterns, relationships, and dependencies. It is widely used in fields like finance, marketing, and social sciences for complex decision-making.

Cluster Analysis, Discriminant Analysis, MDS, SEM
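For example, k-means cluster analysis with scikit-learn on simulated two-variable data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Made-up data: 60 observations in three loose groups.
X = np.vstack([rng.normal(m, 0.5, size=(20, 2)) for m in (0, 3, 6)])

# Standardise first: k-means is distance-based, so feature scale matters.
Xs = StandardScaler().fit_transform(X)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(Xs)
print("cluster sizes:", np.bincount(km.labels_))
print("centres (standardised):", km.cluster_centers_.round(2))
```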

Time Series & Forecasting

Time series analysis examines data points collected sequentially over time to identify trends, seasonal patterns, and cyclic behaviours. Forecasting uses models like ARIMA, exponential smoothing, and machine learning techniques to predict future values, aiding decision-making in finance, economics, and supply chain management.

ARIMA, Exponential Smoothing, Trend Analysis
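Simple exponential smoothing illustrates the idea in a few lines; the `ses` helper and the demand figures below are made up:

```python
# Simple exponential smoothing: each smoothed value is a weighted blend of
# the newest observation and the previous smoothed value.
def ses(series, alpha=0.3):
    smoothed = [series[0]]  # initialise with the first observation
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

# Made-up monthly demand figures.
demand = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119]
fitted = ses(demand, alpha=0.3)

# The one-step-ahead forecast is simply the last smoothed level.
print("next-period forecast:", round(fitted[-1], 1))
```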

Data Reduction & Transformation

Data reduction simplifies datasets by eliminating redundancy while preserving essential information, using techniques such as PCA, factor analysis, and feature selection. Data transformation converts data into a suitable format through normalisation, standardisation, or logarithmic scaling, improving interpretability and model performance and ensuring consistency in statistical analysis.

Data Imputation, Normalization, Standardization, Dummy Variable Creation
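A pandas/scikit-learn sketch of these steps on a made-up table:

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# Made-up data with a missing value and a categorical column.
df = pd.DataFrame({
    "income": [42000, 55000, None, 61000, 48000],
    "city":   ["Chennai", "Madurai", "Chennai", "Salem", "Madurai"],
})

# Imputation: fill the missing income with the column mean.
df["income"] = df["income"].fillna(df["income"].mean())

# Normalisation to [0, 1] and standardisation to z-scores.
df["income_norm"] = MinMaxScaler().fit_transform(df[["income"]]).ravel()
df["income_std"]  = StandardScaler().fit_transform(df[["income"]]).ravel()

# Dummy variable creation for the categorical column.
df = pd.get_dummies(df, columns=["city"], drop_first=True)
print(df.round(3))
```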

Structural Equation Modelling (SEM)

Structural Equation Modelling (SEM) is a multivariate technique that analyses complex relationships among observed and latent variables. Combining factor analysis and regression, SEM evaluates causal relationships using path diagrams, fit indices, and estimation methods like Maximum Likelihood, widely applied in psychology, economics, and social sciences.

Model Identification, Model Fitness Testing, Multi-Group Analysis, Multicollinearity Test, Path Analysis
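Full SEM estimation is usually done with dedicated software, but the path-analysis component can be sketched with ordinary least squares. A minimal example of a hypothetical mediation path X → M → Y on simulated data, with an illustrative `ols` helper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Simulated causal structure: X affects M, and both affect Y.
X = rng.normal(size=n)
M = 0.6 * X + rng.normal(scale=0.8, size=n)
Y = 0.5 * M + 0.2 * X + rng.normal(scale=0.8, size=n)

def ols(y, *predictors):
    """Return OLS coefficients (intercept first) via least squares."""
    A = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(A, y, rcond=None)[0]

a = ols(M, X)[1]         # path X -> M
b, c = ols(Y, M, X)[1:]  # path M -> Y and direct path X -> Y
print(f"a = {a:.2f}, b = {b:.2f}, direct c' = {c:.2f}")
print(f"indirect effect a*b = {a * b:.2f}")
```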

MACHINE LEARNING ALGORITHMS

Optimisation Algorithms (Jupyter Notebook)

Optimisation algorithms systematically search a space of candidate solutions to find the best one within given constraints. They are widely used across engineering, operations research, and related fields to improve efficiency and decision-making; in the lab, these ideas are applied through the regression and ensemble models listed below, implemented in Jupyter notebooks.

OLS Regression, Polynomial Regression, Ridge Regularization, Lasso Regression, Elastic Net, Bayesian Ridge, Decision Trees, Random Forest, Gradient Boosting, XGBoost, AdaBoost, SVM, Gaussian Process, MLP Regressor
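A brief scikit-learn sketch comparing a few of the listed models with cross-validation; the regression problem is simulated:

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
# Made-up regression problem: 200 samples, 5 features, noisy linear target.
X = rng.normal(size=(200, 5))
y = X @ np.array([1.5, -2.0, 0.0, 0.7, 0.0]) + rng.normal(scale=0.5, size=200)

# Compare a few of the listed models with 5-fold cross-validated R^2.
models = {
    "Ridge": Ridge(alpha=1.0),
    "Lasso": Lasso(alpha=0.1),
    "Random Forest": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f}")
```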