# <03> GRADIENT DESCENT

{% embed url="<https://youtu.be/US41FuoLIZs>" %}

## GRADIENT DESCENT

![](https://1085599582-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MNXmrUsLXzvAuiwnR_E%2F-MQf_0J61xdHB8138NSO%2F-MQfb7cFR49Z0EHvUN6u%2Fgradient_descent_line_graph.gif?alt=media\&token=2009be2a-1067-4b8b-b860-ef61792de32e)

The **Gradient Descent** algorithm is used to find the optimal value of the weight **"w"**. Simply put, it finds the optimal solution for us automatically. It is applied not only to finding the value of **w**, but also later to minimizing the **cost function** (which we will study in upcoming lessons).
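The core idea can be sketched on a toy one-dimensional problem: repeatedly step against the derivative until the minimum is reached. The function f(w) = (w − 3)², the starting point, and the learning rate below are illustrative choices, not part of this lesson's dataset.

```python
# Minimal gradient descent sketch: minimize f(w) = (w - 3)^2,
# whose minimum is at w = 3.

def f(w):
    return (w - 3.0) ** 2

def df_dw(w):
    # Analytical derivative: d/dw (w - 3)^2 = 2 * (w - 3)
    return 2.0 * (w - 3.0)

w = 0.0             # arbitrary starting guess
learning_rate = 0.1

for step in range(100):
    # Move w a small step against the gradient
    w = w - learning_rate * df_dw(w)

print(round(w, 4))  # converges toward the minimum at w = 3
```

Each update shrinks the distance to the minimum by a constant factor (here 1 − 0.1·2 = 0.8), which is why the loop converges quickly on this toy example.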

## MATHEMATICAL FORMULA (FOR THE LOSS)

![](https://1085599582-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MNXmrUsLXzvAuiwnR_E%2F-MQj23E5FAfTyx-4Xj7-%2F-MQj3---y-6LnvbtEnix%2Fimage.png?alt=media\&token=0f723e67-63df-434e-b6a9-2412edf667f2)
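Written out explicitly (assuming the simple linear model $\hat{y} = x \cdot w$ used in the practice code below), the loss, its derivative with respect to $w$, and the update rule are:

$$
\text{loss}(x, y) = (\hat{y} - y)^2 = (x \cdot w - y)^2
$$

$$
\frac{\partial\, \text{loss}}{\partial w} = 2x\,(x \cdot w - y)
$$

$$
w \leftarrow w - \alpha \, \frac{\partial\, \text{loss}}{\partial w}
$$

where $\alpha$ is the learning rate.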

## <img src="https://1085599582-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MNXmrUsLXzvAuiwnR_E%2F-MP8AgwsLD-q2EQTEJPx%2F-MP8BpNTUwnLvpov9gYN%2Fworker.png?alt=media&#x26;token=bbe13d47-7dd1-483a-824b-0a47567ca244" alt="" data-size="line"> PRACTICE

```python
# Training data
x_soat = [1.0, 2.0, 3.0]
y_baho = [2.0, 4.0, 6.0]

w = 1.0  # initial guess for w


# Forward pass (our model)
def forward(x):
    return x * w


# Loss function
def loss(x, y):
    y_pred = forward(x)
    return (y_pred - y) * (y_pred - y)


# Gradient function
def gradient(x, y):  # d_loss/d_w
    return 2 * x * (x * w - y)


# Before training
print("Prediction (before training)", "after 4 hours of study:", forward(4))

# Training loop
learning_rate = 0.01
for epoch in range(10):
    for x_hb_qiym, y_hb_qiym in zip(x_soat, y_baho):
        # Compute the gradient,
        # update the value of w,
        # then compute the loss and print the progress
        grad = gradient(x_hb_qiym, y_hb_qiym)
        w = w - learning_rate * grad
        print("\tgrad: ", x_hb_qiym, y_hb_qiym, round(grad, 2))
        l = loss(x_hb_qiym, y_hb_qiym)
    print("progress:", epoch, "w=", round(w, 2), "loss=", round(l, 2))

# After training
print("Prediction (after training)", "after 4 hours of study:", forward(4))
```
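The loop above updates `w` after every single sample (stochastic gradient descent). A common variant, sketched below under the same dataset, averages the gradient over the whole dataset once per epoch (batch gradient descent); the epoch count here is an illustrative choice.

```python
# Batch gradient descent: one averaged update per epoch
# instead of one update per sample.
x_soat = [1.0, 2.0, 3.0]
y_baho = [2.0, 4.0, 6.0]

w = 1.0
learning_rate = 0.01

for epoch in range(100):
    # Mean gradient of the squared error over all samples
    grad = sum(2 * x * (x * w - y)
               for x, y in zip(x_soat, y_baho)) / len(x_soat)
    w = w - learning_rate * grad

print(round(w, 4))  # w approaches the true slope 2.0
```

Batch updates follow a smoother path toward the minimum, while per-sample updates are noisier but often converge in fewer passes over the data.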

## <img src="https://1085599582-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MNXmrUsLXzvAuiwnR_E%2F-MQRh3BXh494mAMFZd5Z%2F-MQRhqm0AV-Z33WBTWBa%2Ffolder.png?alt=media&#x26;token=5aee8796-e77c-4f96-b723-7355cd5a4259" alt="" data-size="line"> MATERIALS

{% file src="<https://1085599582-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MNXmrUsLXzvAuiwnR_E%2F-MW-45igmeSu4YNqG1-o%2F-MW-4jnzO5InRQZWWVon%2F3-lecture(Gradient%20Descent).pdf?alt=media&token=034ff36f-69d7-4d9f-a673-8000ad2ee39f>" %}
Lesson 3. Slides
{% endfile %}
