Danyal Jamil

65 Followers

Machine Learning Enthusiast | Student

Published in The Startup · Pinned

Just Finished a Paid Career Path, Am I a Web Developer Now?

What you think vs. what is real. — Stuck behind the paywall? Click here to read the full story with my friend link! So, 3 months ago, I woke up at 3 pm, earlier than usual, thinking about which movie to watch. I opened Facebook and saw some people posting about Codecademy offering free Pro memberships for Covid-19. …

JavaScript

7 min read

Jan 9

Let’s have ChatGPT in Python!

Not Davinci, we need ChatGPT itself. — I recently had to integrate ChatGPT into a Python app and, not wanting to re-invent the wheel (as any developer would), I started searching for it on Medium and other websites. After spending a lot of time, all I could really find were implementations of The…

ChatGPT

2 min read
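
For context, a minimal sketch of the kind of call the post works toward, using the official openai Python package's chat endpoint (pre-1.0 interface). This is an assumed illustration, not the article's own code; the post may rely on a different wrapper, and the model name and environment variable are assumptions.

```python
# Minimal sketch (not the article's code): calling a ChatGPT-style chat model
# from Python with the official `openai` package (pre-1.0 interface).
# Assumes an OPENAI_API_KEY environment variable; model name is an assumption.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # a chat model, as opposed to the Davinci completion models
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello from Python!"},
    ],
)

print(response["choices"][0]["message"]["content"])
```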

Sep 27, 2021

Managing state in React Application? Yeah, 8 ways to do it.

No hard and fast rule, choose the one you like. — It’s been so long since my last post on Web Development. So, here is another one. In this post, I will mention 8 ways to handle state in a React application. …

Web

5 min read

Published in Artificial Intelligence in Plain English · May 8, 2021

DL Recap: Basic Feed Forward Network

A look into how a Multi-Layer Perceptron performs its feedforward pass. — “Deep Learning is the new Electricity.” ~ Andrew Ng. Yet so many people fail to understand it. The study and understanding of core concepts in Deep Learning require background knowledge in Linear Algebra, Calculus, Probability, and Statistics. …

Deep Learning

6 min read
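
As a rough illustration of the feedforward pass the post walks through; the layer sizes and the sigmoid activation are arbitrary assumptions, not taken from the article.

```python
# Illustrative sketch of one feedforward pass through a small MLP with NumPy.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

x = rng.normal(size=(4, 1))                          # input vector (4 features)
W1, b1 = rng.normal(size=(3, 4)), np.zeros((3, 1))   # hidden layer: 4 -> 3
W2, b2 = rng.normal(size=(1, 3)), np.zeros((1, 1))   # output layer: 3 -> 1

a1 = sigmoid(W1 @ x + b1)      # hidden activation
y_hat = sigmoid(W2 @ a1 + b2)  # network output
print(y_hat)
```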

Published in Artificial Intelligence in Plain English · Jan 17, 2021

What’s Gradient Descent with Momentum?

A quick intro to the topic. — This article is a continuation of a series that focuses on a basic understanding of the building blocks of Deep Learning. Some of the previous articles, in case you need to catch up: Multi-Class Classification? Yes. Let’s discuss what it is! (medium.com); Why should we use Batch Normalization in Deep Learning? Let’s discuss why! (becominghuman.ai); Insights to How to Tune Your Hyper Parameters: A guide to tuning your Hyper Parameters to get the best accuracy (d3nyal.medium.com)

Deep Learning

4 min read
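
A minimal sketch of the momentum update the post introduces, run on a toy quadratic loss; the learning rate and beta values are illustrative assumptions.

```python
# Gradient descent with momentum: keep an exponentially weighted average of
# past gradients (v) and step in that direction.
import numpy as np

def grad(w):      # gradient of a toy quadratic loss f(w) = ||w||^2
    return 2 * w

w = np.array([5.0, -3.0])
v = np.zeros_like(w)
lr, beta = 0.1, 0.9

for _ in range(100):
    v = beta * v + (1 - beta) * grad(w)  # momentum accumulator
    w = w - lr * v                       # parameter update

print(w)  # approaches the minimum at [0, 0]
```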

Published in The Startup · Dec 31, 2020

Multi-Class Classification? Yes.

Let’s discuss what it is! — Stuck behind the paywall? Click here to read the full story with my friend link! This article is a continuation of a series that focuses on a basic understanding of the building blocks of Deep Learning. Some of the previous articles, in case you need to catch up: Insights to how to tune your hyper parameters! A guide to tuning your Hyper Parameters to get the best accuracy. (medium.com)

Deep Learning

4 min read
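
As an assumed illustration (not code from the article) of the softmax-plus-cross-entropy machinery that sits behind multi-class classification:

```python
# Softmax turns raw class scores (logits) into a probability distribution;
# cross-entropy penalizes low probability on the true class.
import numpy as np

def softmax(logits):
    z = logits - logits.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])  # raw scores for 3 classes
probs = softmax(logits)
label = 0                           # true class index
loss = -np.log(probs[label])        # cross-entropy loss for this example

print(probs, probs.argmax(), loss)
```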

Published in Becoming Human: Artificial Intelligence Magazine · Dec 7, 2020

Why should we use Batch Normalization in Deep Learning?

Let’s discuss why! — Stuck behind the paywall? Click here to read the full story with my friend link! This article is a continuation of a series that focuses on a basic understanding of the building blocks of Deep Learning. Some of the previous articles, in case you need to catch up: Want to Optimize your Model? Use Learning Rate Decay! Adapting your Learning Rate Parameter with time can make a huge difference! Let’s see how. (medium.com)

Deep Learning

5 min read
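
A minimal sketch of what batch normalization does to one mini-batch; the toy data, gamma, and beta are illustrative assumptions rather than the article's example.

```python
# Batch normalization: normalize each feature to zero mean / unit variance over
# the mini-batch, then rescale with learnable gamma and shift with beta.
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    mu = x.mean(axis=0)                    # per-feature mean over the batch
    var = x.var(axis=0)                    # per-feature variance over the batch
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalized activations
    return gamma * x_hat + beta            # scale and shift

x = np.random.default_rng(0).normal(loc=5.0, scale=3.0, size=(8, 4))  # batch of 8, 4 features
gamma, beta = np.ones(4), np.zeros(4)
out = batch_norm(x, gamma, beta)
print(out.mean(axis=0), out.std(axis=0))   # roughly 0 and 1 per feature
```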

Published in DataDrivenInvestor · Nov 26, 2020

Insights to How to Tune Your Hyper Parameters

A guide to tuning your Hyper Parameters to get the best accuracy. — Stuck behind the paywall? Click here to read the full story with my friend link! This article is a continuation of a series that focuses on a basic understanding of the building blocks of Deep Learning. Some of the previous articles, in case you need to catch up: Want to Optimize your Model? Use Learning Rate Decay! Adapting your Learning Rate Parameter with time can make a huge difference! Let’s see how. (medium.com)

Deep Learning

4 min read
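
As a rough sketch of one common tuning strategy, random search over the learning rate on a toy problem; the search range and the toy loss are assumptions, not the article's recipe.

```python
# Random-search hyperparameter tuning: sample the learning rate on a log scale
# and keep whichever value gives the lowest final loss.
import numpy as np

def train(lr, steps=50):   # toy training loop on f(w) = ||w||^2
    w = np.array([5.0, -3.0])
    for _ in range(steps):
        w = w - lr * 2 * w
    return float(np.sum(w ** 2))  # final loss

rng = np.random.default_rng(0)
best_lr, best_loss = None, float("inf")
for _ in range(20):
    lr = 10 ** rng.uniform(-4, 0)  # sample log-uniformly in [1e-4, 1]
    loss = train(lr)
    if loss < best_loss:
        best_lr, best_loss = lr, loss

print(best_lr, best_loss)
```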

Published in Towards AI · Nov 12, 2020

Want to Optimize your Model? Use Learning Rate Decay!

Adapting your Learning Rate Parameter with time can make a huge difference! Let’s see how. — Stuck behind the paywall? Click here to read the full story with my friend link! This article is a continuation of a series that focuses on a basic understanding of the building blocks of Deep Learning. Some of the previous articles, in case you need to catch up: Want your model to converge faster? Use RMSProp! This is another technique used to speed up Training. (medium.com)

Deep Learning

5 min read
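
A minimal sketch of one standard decay schedule, lr / (1 + decay_rate * epoch); the constants are illustrative assumptions, not the article's values.

```python
# Learning-rate decay: the learning rate shrinks as the epoch count grows,
# so update steps get smaller as training approaches a minimum.
def decayed_lr(lr0, epoch, decay_rate=0.1):
    return lr0 / (1.0 + decay_rate * epoch)

lr0 = 0.2
for epoch in range(0, 50, 10):
    print(epoch, round(decayed_lr(lr0, epoch), 4))
```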

Published in Analytics Vidhya · Nov 2, 2020

Want your model to converge faster? Use RMSProp!

This is used to speed up Gradient Descent. — Stuck behind the paywall? Click here to read the full story with my friend link! This article is a continuation of a previous series of articles. Here are links to the stories, if you want to follow along: Model Overfitting? Use L2 Regularization! Use this to enhance your Deep Learning models! (medium.com); Training taking too long? Use Exponentially Weighted Averages! Use this optimization to speed up your training! (medium.com)

Deep Learning

5 min read
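
As an assumed illustration of the RMSProp update on a toy quadratic loss; the learning rate and beta are arbitrary choices, not taken from the article.

```python
# RMSProp: keep an exponentially weighted average of squared gradients (s) and
# divide each step by its square root, damping steep directions.
import numpy as np

def grad(w):   # gradient of a toy quadratic loss f(w) = ||w||^2
    return 2 * w

w = np.array([5.0, -3.0])
s = np.zeros_like(w)
lr, beta, eps = 0.01, 0.9, 1e-8

for _ in range(500):
    g = grad(w)
    s = beta * s + (1 - beta) * g ** 2    # running average of squared gradients
    w = w - lr * g / (np.sqrt(s) + eps)   # scaled update

print(w)  # approaches the minimum at [0, 0]
```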
