Chapter 3: Training an Image-Based Illuminance Estimation Model

Red Linux · Source: Red Linux · Author: Red Linux · 2024-11-06 15:57

Preface

This installment reaches the image-based illuminance estimation chapter. Here I record how to use TensorFlow 2 to train an algorithm that estimates illuminance from an image. The usual workflow is to photograph a number of scenes and measure each one with a lux meter, using the images as inputs and the meter readings as outputs. For demonstration purposes in this chapter, however, I use the [MIT-Adobe FiveK Dataset] as the image source, and for the illuminance value I compute a brightness figure from each image's rgb values with the formula r*0.2126 + g*0.7152 + b*0.0722. This yields a dataset of reasonable size, which is enough of a basis for the subsequent training and testing. Now on to the main content.
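
As a quick, minimal sketch of that per-pixel brightness formula (the function name and the sample pixel value here are illustrative, not taken from the original code):

# Rec. 709 luma weights, as used for the labels in this chapter
def pixel_luminance(r, g, b):
    return r * 0.2126 + g * 0.7152 + b * 0.0722

# A mid-grey pixel: 128 * (0.2126 + 0.7152 + 0.0722) = 128.0
print(pixel_luminance(128, 128, 128))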

Data Acquisition

The MIT-Adobe FiveK Dataset contains 5000 raw dng images plus tiff versions retouched by five experts (A, B, C, D and E); normally this dataset is used to train image-enhancement models, but here I repurpose it for illuminance estimation. The complete archive is very large (~50 GB), and given the limited capacity and speed of my machine I chose to download the images one by one with a script: the download URLs follow a fixed pattern, and the image file names can be downloaded from the official site, so the script reads the file containing the image names, builds each download URL, and fetches it with curl. I downloaded the original dng images and expert C's tiff images.
The script for downloading the raw dng files is:

#!/usr/bin/bash

# Working directory that holds the image-name lists and the downloads
CURRENT_PATH="/home/red/Downloads/fivek_dataset/expertc"
# Switch to that directory
cd "${CURRENT_PATH}" || exit 1

# Read the image-name lists (downloadable from the dataset's official site)
files_name=$(cat filesAdobe.txt)
files_mit_name=$(cat filesAdobeMIT.txt)   # second name list, not used in this loop

j=0
for i in ${files_name}; do
    # e.g. https://data.csail.mit.edu/graphics/fivek/img/dng/a0001-jmac_DSC1459.dng
    URL='https://data.csail.mit.edu/graphics/fivek/img/dng/'${i}'.dng'
    file_cur=${URL##*/}
    echo "Downloading ${URL}@${j}"
    j=$((j+1))
    if [ -f "${file_cur}" ]; then
        echo "${file_cur} exists, skipping"
    else
        curl -O "${URL}"
    fi
done

The script for downloading expert C's processed files is:

#!/usr/bin/bash

# Working directory that holds the image-name lists and the downloads
CURRENT_PATH="/home/red/Downloads/fivek_dataset/expertc"
# Switch to that directory
cd "${CURRENT_PATH}" || exit 1

# Read the image-name lists (downloadable from the dataset's official site)
files_name=$(cat filesAdobe.txt)
files_mit_name=$(cat filesAdobeMIT.txt)   # second name list, not used in this loop

j=0
for i in ${files_name}; do
    # Download the images retouched by expert C (the other four experts' sets
    # can be fetched the same way by changing tiff16_c)
    URL='https://data.csail.mit.edu/graphics/fivek/img/tiff16_c/'${i}'.tif'
    file_cur=${URL##*/}
    echo "Downloading ${URL}@${j}"
    j=$((j+1))
    if [ -f "${file_cur}" ]; then
        echo "${file_cur} exists, skipping"
    else
        echo "${file_cur} does not exist yet, downloading"
        curl -O "${URL}"
    fi
done
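
Both scripts check with [ -f ... ] whether a file has already been downloaded and skip it if so, so after an interrupted session they can simply be re-run and will resume where they left off; the only prerequisite is that the filesAdobe.txt and filesAdobeMIT.txt name lists are already present in CURRENT_PATH.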

After several days of on-and-off downloading I ended up with roughly 1000 images. With the images in hand, the next step is to compute the illuminance value for each one. This is done with a Python script and the pillow package; to make later porting to the AI300G easier, every image is resized to a uniform 255*255, and the computed illuminance together with the image name is written to a csv file. The script for this part is:

#!/usr/bin/env python3

import sys
import csv
import os
import re

from PIL import Image

gs_illumiance_csv_file_fd=0
gs_illumiance_csv_file_name='illumiance_estimate.csv'
gs_illumiance_data_list=[['Name', 'Illuminance']]
DEST_DIR_NAME=r'PNG255'

def illuname_estimate(t):
    # Rec. 709 luma weights
    r,g,b=t
    return r*0.2126+g*0.7152+b*0.0722


def get_pic_pixels(pic_name):
    with Image.open(pic_name) as pic:
        ans=0
        # Resize to the uniform 255x255 used throughout this chapter and force
        # 3-channel RGB so getpixel() always returns an (r, g, b) tuple
        pic=pic.resize((255,255)).convert('RGB')
        print(f'raw name:{pic_name}')
        # Input paths look like "JP/a0001-xxx.jpeg"; capture the base name
        match=re.match(r'\w+/(\S+)\.\w+', pic_name)
        if match:
            basename=match.group(1)
            basename=DEST_DIR_NAME+'/'+basename+'.png'
            print(f'new name:{basename}')
            pic.save(basename)
        width, height = pic.size
        # Sum the per-pixel luminance over the whole resized image
        for x in range(width):
            for y in range(height):
                r, g, b = pic.getpixel((x, y))
                ans=ans+illuname_estimate((r,g,b))

    # Round the illuminance to an integer
    ans=round(ans)
    print(f'{pic_name}: illuname ans:{ans}')
    return ans

def insert_item(pic_name, illumiance_estimate):
    global gs_illumiance_data_list
    gs_illumiance_data_list.append([pic_name, illumiance_estimate])

def do_with_dir(dir_name):
    for filename in os.listdir(dir_name):
        filepath=os.path.join(dir_name, filename)
        if os.path.isfile(filepath):
            print("do input %s" %(filepath))
            ans=get_pic_pixels(filepath)
            insert_item(filename, ans)

if len(sys.argv) > 1:
    print("do input dir:%s" %(sys.argv[1]))
    if not os.path.exists(DEST_DIR_NAME):
        os.makedirs(DEST_DIR_NAME)
    do_with_dir(sys.argv[1])
    gs_illumiance_csv_file_fd=open(gs_illumiance_csv_file_name, 'w', newline='')
    csv.writer(gs_illumiance_csv_file_fd).writerows(gs_illumiance_data_list)
    gs_illumiance_csv_file_fd.close()
else:
    print("Please input pic name")

This produces a dataset that looks like the following:

❯ head illumiance_estimate.csv
Name,Illuminance
a0351-MB_070908_006_dng.jpeg,3680630
a0100-AlexWed07-9691_dng.jpeg,1258657
a0147-kme_333.jpeg,5168820
a0261-_DSC2228_dng.jpeg,2571498
a0255-_DSC1448.jpeg,8747593
a0054-kme_097.jpeg,5351908
a0393-_DSC0040.jpeg,1783394
a0304-dgw_137_dng.jpeg,3118835
a0437-jmacDSC_0011.jpeg,6140107
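
Before moving on to training, it is worth a quick sanity check of the label scale: each value is a sum of 255*255 per-pixel luminances, so it can reach roughly 255*255*255 ≈ 16.6 million. A minimal sketch of such a check (assuming pandas is available; the column name matches the CSV header above and the constant 16777215 matches the training script below):

#!/usr/bin/env python3
# Quick check of the generated labels
import pandas as pd

df = pd.read_csv('illumiance_estimate.csv')
print(df['Illuminance'].describe())             # min / max / mean of the raw sums
print((df['Illuminance'] / 16777215.0).max())   # normalized labels should stay within [0, 1]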

At this point we have a dataset of reasonable size (I used 667 photos here), and the next step is model training.

Model Training

The basic idea of model training is to first split the dataset into a training set and a test set at a 4:1 ratio, then build a model with tensorflow, train its parameters and validate them.
The rough flow is:

  1. build a tensorflow dataset from the csv file;
  2. build the model, then train and test it on that dataset.

The code for this part is:

#!/usr/bin/python3.11

# NOTE: a bare Python assignment like this does NOT set the environment variable.
# To actually disable oneDNN custom ops, export TF_ENABLE_ONEDNN_OPTS=0 in the
# shell (or set os.environ['TF_ENABLE_ONEDNN_OPTS']='0' before importing
# tensorflow) -- the log below shows oneDNN is still enabled here.
TF_ENABLE_ONEDNN_OPTS=0

import numpy as np
import os
import PIL
import PIL.Image
import tensorflow as tf
import pathlib
import csv
import pandas as pd
import tensorflow.data
import sys
import matplotlib.pyplot as plt

AUTOTUNE=tensorflow.data.AUTOTUNE
BATCH_SIZE=32
IMG_WIDTH=255
IMG_HEIGHT=255
ILLUMINACE_FILE=r'illumiance_estimate.csv'
print(tf.__version__)

image_count = len(os.listdir(r'JP'))
print(f'whole img count={image_count}')
# The CSV file has two columns: 'Name' (image file name) and 'Illuminance'
df = pd.read_csv(ILLUMINACE_FILE)

# Convert the DataFrame columns into arrays TensorFlow can consume
image_paths = df['Name'].values
labels = df['Illuminance'].values
labels = labels.astype(np.float32)
# Normalize: each label is a sum of 255x255 per-pixel luminances, at most about
# 255*255*255, so dividing by 16777215 (2^24 - 1) maps it into [0, 1]
labels /= 16777215.0

# Build a Dataset of (file name, label) pairs
gs_dataset = tf.data.Dataset.from_tensor_slices((image_paths, labels))

print(type(gs_dataset))
print(gs_dataset)
print(r'-------------------------------------------')
# Load and preprocess one image
def load_and_preprocess_image(image_path, label):
    print(image_path)
    image_path='JP/'+image_path
    image = tf.io.read_file(image_path)
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.image.resize(image, [IMG_WIDTH, IMG_HEIGHT])
    return image, label

# Apply the preprocessing function to the whole Dataset
gs_dataset = gs_dataset.map(load_and_preprocess_image)
# Shuffle once before splitting so the train/validation split is random
gs_dataset = gs_dataset.shuffle(image_count, reshuffle_each_iteration=False)

# 4:1 split between training and validation
val_size = int(image_count * 0.2)

gs_train_ds = gs_dataset.skip(val_size)
gs_val_ds = gs_dataset.take(val_size)

def configure_for_performance(ds):
    ds = ds.cache()
    ds = ds.shuffle(buffer_size=1000)
    ds = ds.batch(BATCH_SIZE)
    ds = ds.prefetch(buffer_size=AUTOTUNE)
    return ds

gs_train_ds = configure_for_performance(gs_train_ds)
gs_val_ds = configure_for_performance(gs_val_ds)

image_batch, illuminance_batch = next(iter(gs_train_ds))

# Optional visual check of one batch (left disabled):
#  plt.figure(figsize=(10, 10))
#  for i in range(9):
#      ax = plt.subplot(3, 3, i + 1)
#      plt.imshow(image_batch[i].numpy().astype("uint8"))
#      plt.title(illuminance_batch[i].numpy())
#      plt.axis("off")
#  plt.show()

# A small CNN regressor: pixels are rescaled to [0, 1], three conv/pool stages
# extract features, and a single linear output predicts the normalized illuminance
model = tf.keras.Sequential([
  tf.keras.layers.Rescaling(1./255),
  tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(IMG_WIDTH, IMG_HEIGHT, 3)),
  tf.keras.layers.MaxPooling2D(2, 2),
  tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
  tf.keras.layers.MaxPooling2D(2, 2),
  tf.keras.layers.Conv2D(32, 3, activation='relu'),
  tf.keras.layers.MaxPooling2D(),
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dense(1)
])

# Regression task, so mean squared error is used as the loss
model.compile(
  optimizer='adam',
  loss='mean_squared_error')

model.fit(
  gs_train_ds,
  validation_data=gs_val_ds,
  epochs=12
)

# Save in the TensorFlow SavedModel format
model.save("illu_v01")

Running the code above, the final loss and val_loss come out as:

❯ ./train_tf2_v2.py
2024-08-08 13:41:48.341117: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-08-08 13:41:48.342596: I external/local_tsl/tsl/cuda/cudart_stub.cc:31] Could not find cuda drivers on your machine, GPU will not be used.
2024-08-08 13:41:48.363696: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-08-08 13:41:48.363729: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-08-08 13:41:48.364549: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-08-08 13:41:48.368601: I external/local_tsl/tsl/cuda/cudart_stub.cc:31] Could not find cuda drivers on your machine, GPU will not be used.
2024-08-08 13:41:48.368762: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-08-08 13:41:48.801750: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2.15.0
whole img count=667
2024-08-08 13:41:51.138713: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:901] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2024-08-08 13:41:51.139135: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2256] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
<class 'tensorflow.python.data.ops.from_tensor_slices_op._TensorSliceDataset'>
<_TensorSliceDataset element_spec=(TensorSpec(shape=(), dtype=tf.string, name=None), TensorSpec(shape=(), dtype=tf.float32, name=None))>
-------------------------------------------
Tensor("args_0:0", shape=(), dtype=string)
Epoch 1/12
17/17 [==============================] - 11s 603ms/step - loss: 98.9302 - val_loss: 0.1012
Epoch 2/12
17/17 [==============================] - 8s 495ms/step - loss: 0.0493 - val_loss: 0.0043
Epoch 3/12
17/17 [==============================] - 8s 481ms/step - loss: 0.0078 - val_loss: 0.0043
Epoch 4/12
17/17 [==============================] - 8s 479ms/step - loss: 0.0025 - val_loss: 0.0040
Epoch 5/12
17/17 [==============================] - 8s 477ms/step - loss: 0.0023 - val_loss: 0.0029
Epoch 6/12
17/17 [==============================] - 8s 480ms/step - loss: 0.0021 - val_loss: 0.0028
Epoch 7/12
17/17 [==============================] - 8s 482ms/step - loss: 0.0020 - val_loss: 0.0028
Epoch 8/12
17/17 [==============================] - 8s 482ms/step - loss: 0.0019 - val_loss: 0.0027
Epoch 9/12
17/17 [==============================] - 8s 482ms/step - loss: 0.0018 - val_loss: 0.0026
Epoch 10/12
17/17 [==============================] - 8s 485ms/step - loss: 0.0017 - val_loss: 0.0026
Epoch 11/12
17/17 [==============================] - 8s 485ms/step - loss: 0.0015 - val_loss: 0.0023
Epoch 12/12
17/17 [==============================] - 8s 484ms/step - loss: 0.0011 - val_loss: 0.0020

The model is also saved in the illu_v01 directory.

❯ ls illu_v01/
assets  fingerprint.pb  keras_metadata.pb  saved_model.pb  variables

Model Testing

Now that we have a model, the next step is to test it; the following python code runs the test on the PC:

#!/usr/bin/python3.11

import numpy as np
import os
import PIL
import PIL.Image
import tensorflow as tf
import pathlib
import csv
import pandas as pd
import tensorflow.data
import sys
import matplotlib.pyplot as plt

IMG_WIDTH=255
IMG_HEIGHT=255

# Reload the SavedModel produced by the training script
reload_model=tf.keras.models.load_model("illu_v01")
image_path=r'./JP/a0001-jmac_DSC1459.jpeg'
if len(sys.argv) < 2:
    print('Please input some pic to predict')
    sys.exit()
else:
    image_path=sys.argv[1]

# Same preprocessing as in training, plus a leading batch dimension
image = tf.io.read_file(image_path)
image = tf.image.decode_jpeg(image, channels=3)
image = tf.image.resize(image, [IMG_WIDTH, IMG_HEIGHT])
image = tf.reshape(image, [1, IMG_WIDTH, IMG_HEIGHT, 3])

predictions=reload_model.predict(image)
# Undo the label normalization to get back the raw luminance sum
print(f'{image_path} ans={predictions*16777215}')

A quick test of the model:

check_tf2.py JP/a0001-jmac_DSC1459.jpeg
2024-08-08 13:57:08.263506: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-08-08 13:57:08.264895: I external/local_tsl/tsl/cuda/cudart_stub.cc:31] Could not find cuda drivers on your machine, GPU will not be used.
2024-08-08 13:57:08.285614: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-08-08 13:57:08.285646: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-08-08 13:57:08.286510: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-08-08 13:57:08.290464: I external/local_tsl/tsl/cuda/cudart_stub.cc:31] Could not find cuda drivers on your machine, GPU will not be used.
2024-08-08 13:57:08.290608: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-08-08 13:57:08.725843: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2024-08-08 13:57:11.051710: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:901] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2024-08-08 13:57:11.051982: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2256] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
1/1 [==============================] - 0s 57ms/step
JP/a0001-jmac_DSC1459.jpeg ans=[[5459503.]]

The estimated illuminance is 5459503, which compared with the actual value of 5363799 is an error of roughly 1.8% ((5459503 - 5363799) / 5363799 ≈ 0.018). Still, at this point the whole model training and testing workflow is complete; the next step is to simulate a video stream on the PC and use the model to compute illuminance from the images in real time. Stay tuned.

Reviewed and edited by 黃宇
