iks-ran
DreamFusion

Motivation Applying diffusion models to other modalities has been successful, but it requires large amounts of modality-specific training data. 3D assets are currently designed by hand in modeling software…
2023-08-14
Papers
#AIGC #Computer Vision
Neural Radiance Fields

3D Shape Representations: Explicit Representation. The description of a scene is explicit, and the 3D representation of the scene can be seen directly, such as a mesh, point cloud, voxel grid, or volume, which ca…
2023-08-11
Papers
#AIGC #Computer Vision #Neural Radiance Fields
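The explicit representations named in the excerpt (point cloud, voxel, mesh) can be sketched with plain arrays; the shapes and resolutions below are illustrative assumptions, not values from the post:

```python
import numpy as np

rng = np.random.default_rng(0)

# Point cloud: an (N, 3) array of xyz samples (here: random points on a unit sphere)
v = rng.normal(size=(100, 3))
point_cloud = v / np.linalg.norm(v, axis=1, keepdims=True)

# Voxel grid: a dense boolean occupancy volume (True = occupied)
res = 16
grid = np.linspace(-1, 1, res)
x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")
voxels = x**2 + y**2 + z**2 <= 1.0  # a solid sphere

# Mesh: explicit vertices plus faces that index into them (two triangles = one quad)
vertices = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
faces = np.array([[0, 1, 2], [0, 2, 3]])  # each row: 3 vertex indices
```

All three are "explicit" in the post's sense: the geometry is stored directly in the arrays rather than implicitly in network weights.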
networksetup: A Terminal Network Configuration Tool on macOS

networksetup: a terminal network configuration tool on macOS. What is networksetup? networksetup is a network configuration tool bundled with macOS as part of the operating system; it lets users manage network settings and configuration from the terminal. It provides functions for configuring network preferences, interfaces, proxies, DNS settings, and more. Basic usage: networksetup -help…
2023-08-10
Others
#macOS
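Beyond `networksetup -help`, a few common invocations can be sketched as below; the service name "Wi-Fi" is an assumption (list your own service names first, as they vary by machine and locale):

```shell
# List all network services (use these names in the commands below)
networksetup -listallnetworkservices

# Show IP, subnet, and router info for one service ("Wi-Fi" is an assumed name)
networksetup -getinfo "Wi-Fi"

# Read and set DNS servers for that service (setting requires admin rights)
networksetup -getdnsservers "Wi-Fi"
sudo networksetup -setdnsservers "Wi-Fi" 1.1.1.1 8.8.8.8
```

These commands are macOS-only; run them in Terminal, not on other platforms.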
Denoising Diffusion Probabilistic Models

Basics of Probability: Conditional Probability \begin{aligned} P(A, B, C) &= P(C|A, B)P(A, B) = P(C|A, B)P(B|A)P(A) \\ P(B, C|A) &= \frac{P(A, B, C)}{P(A)} = P(C|A, B)P(B|A) \end{aligned}
2023-08-07
Papers
#Deep Learning #Generative Model
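The conditional-probability identities in the excerpt are instances of the chain rule; written out for n events (the general form is a standard identity, not taken from the post):

```latex
\begin{aligned}
P(A_1, \dots, A_n) &= \prod_{i=1}^{n} P(A_i \mid A_1, \dots, A_{i-1}) \\
P(B, C \mid A) &= \frac{P(A, B, C)}{P(A)} = P(C \mid A, B)\, P(B \mid A)
\end{aligned}
```

The three-event case in the excerpt is exactly the n = 3 instance of the product above.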
Generative Adversarial Networks

Architecture Fig.1 GAN Architecture. The GAN architecture is illustrated in Fig.1. There are two components in the GAN architecture: a generator and a discriminator. The generator is used to gene…
2023-07-28
Papers
#Deep Learning #Generative Model
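The two-player setup the excerpt describes can be sketched in NumPy; the linear generator, logistic discriminator, and all dimensions below are assumptions for illustration, not the post's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, W):
    """Map latent noise z to fake samples (a linear generator for illustration)."""
    return z @ W

def discriminator(x, w):
    """Return D(x) in (0, 1): the probability that x is a real sample."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

# Toy data: real samples and latent noise
real = rng.normal(2.0, 1.0, size=(64, 2))
z = rng.normal(size=(64, 4))
W = rng.normal(size=(4, 2))   # generator parameters
w = rng.normal(size=2)        # discriminator parameters

# GAN value function V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]:
# the discriminator ascends V, the generator descends it (the minimax game).
fake = generator(z, W)
V = np.mean(np.log(discriminator(real, w))) + np.mean(np.log(1.0 - discriminator(fake, w)))
```

Training alternates gradient steps on w (maximizing V) and on W (minimizing it); this sketch only evaluates the objective once.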
Vision Transformer

Vision Transformer: Inductive Bias. The Vision Transformer has much less image-specific inductive bias than CNNs. In CNNs, locality, two-dimensional neighborhood structure, and translation equivariance are ba…
2023-07-27
Papers
#Computer Vision #Deep Learning
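The patchify step behind ViT's weak 2D inductive bias can be sketched as below; the image and patch sizes are assumed for illustration:

```python
import numpy as np

def patchify(img, p):
    """Split an (H, W, C) image into flattened non-overlapping p x p patches.

    Returns an (H*W/p^2, p*p*C) array of input tokens: after this step the
    2D neighborhood structure survives only in the position embeddings.
    """
    H, W, C = img.shape
    assert H % p == 0 and W % p == 0
    patches = img.reshape(H // p, p, W // p, p, C)
    patches = patches.transpose(0, 2, 1, 3, 4)   # (H/p, W/p, p, p, C)
    return patches.reshape(-1, p * p * C)        # (num_patches, patch_dim)

img = np.arange(32 * 32 * 3, dtype=float).reshape(32, 32, 3)
tokens = patchify(img, 8)   # 16 patches, each of dimension 8*8*3 = 192
```

Each row of `tokens` is then linearly projected and fed to a plain Transformer encoder.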
BERT:Bidirectional Encoder Representations from Transformers

Input/Output Representations Fig.1 Input Representation. To handle a variety of downstream tasks, the input representation of BERT is able to unambiguously represent both a single sen…
2023-07-26
Papers
#Deep Learning #Natural Language Processing
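The packed sentence-pair input the excerpt refers to can be sketched as below; the word-level tokens are an assumption (BERT itself uses WordPiece subwords):

```python
# BERT packs a pair as [CLS] sentence A [SEP] sentence B [SEP], with segment
# ids marking which sentence each token belongs to, so one input format
# unambiguously covers both single sentences and sentence pairs.
def pack_pair(tokens_a, tokens_b):
    tokens = ["[CLS]"] + tokens_a + ["[SEP]"] + tokens_b + ["[SEP]"]
    segment_ids = [0] * (len(tokens_a) + 2) + [1] * (len(tokens_b) + 1)
    return tokens, segment_ids

tokens, segs = pack_pair(["my", "dog"], ["is", "cute"])
```

A single sentence is the degenerate case with an empty second segment.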
Transformer

Transformer: Model Architecture Fig.1 Transformer. The Transformer was first used for machine translation with an encoder-decoder structure; see Fig.1. The input (source) and output (target) se…
2023-07-19
Papers
#Deep Learning #Natural Language Processing
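The core operation inside both the encoder and decoder is scaled dot-product attention; a minimal NumPy sketch (dimensions are assumptions for illustration):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(5, 8))   # 5 query positions, d_k = 8
K = rng.normal(size=(7, 8))   # 7 key/value positions
V = rng.normal(size=(7, 8))
out = attention(Q, K, V)      # (5, 8): one output vector per query
```

In the encoder Q, K, and V all come from the source sequence; in the decoder's cross-attention, Q comes from the target side and K, V from the encoder output.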
Tokenization And Embedding

Tokenization and Embedding: Tokenization Fig.1 Tokenization. Encode the input sequence (usually a sentence in NLP) with a tokenizer into a sequence of integers. For example, >>> inputs = "…
2023-07-16
Python
#Deep Learning #Natural Language Processing
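The encode-then-look-up pipeline the excerpt starts to show can be sketched with a toy whitespace tokenizer; the vocabulary and embedding size below are assumptions (real NLP stacks use subword tokenizers such as BPE or WordPiece):

```python
import numpy as np

# Toy whitespace tokenizer: map each word to an integer id (0 = unknown)
vocab = {"[UNK]": 0, "hello": 1, "world": 2, "deep": 3, "learning": 4}

def tokenize(text):
    return [vocab.get(w, vocab["[UNK]"]) for w in text.lower().split()]

# Embedding: a lookup table turning each id into a dense vector
rng = np.random.default_rng(0)
emb_table = rng.normal(size=(len(vocab), 8))   # (vocab_size, embed_dim)

ids = tokenize("Hello deep learning world")
embeddings = emb_table[ids]                    # one 8-dim vector per token
```

The embedding table is a trainable parameter; indexing it with the token ids is the entire "embedding" step.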
Data Parallel And Distributed Data Parallel

Data Parallel and Distributed Data Parallel: Data Parallel Procedure Fig.1 Data Parallel. While iterating: 1. Scatter the batch into mini-batches and distribute them across the given GPUs. 2.…
2023-07-14
Python
#Deep Learning
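The scatter/compute/gather loop the excerpt begins to enumerate can be simulated on CPU; the linear model, squared loss, and shard count are assumptions for illustration, not the post's setup:

```python
import numpy as np

def grad(w, X, y):
    """Gradient of the mean squared error (1/2n)*||Xw - y||^2 w.r.t. w."""
    n = len(X)
    return X.T @ (X @ w - y) / n

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))
y = rng.normal(size=32)
w = np.zeros(4)

n_gpus = 4
X_shards = np.split(X, n_gpus)          # 1. scatter the batch into mini-batches
y_shards = np.split(y, n_gpus)
grads = [grad(w, Xs, ys)                # 2. each "replica" computes its gradient
         for Xs, ys in zip(X_shards, y_shards)]
avg = np.mean(grads, axis=0)            # 3. gather/all-reduce: average gradients
w -= 0.1 * avg                          # 4. identical update on every replica
```

Because the shards are equal-sized, the averaged shard gradient equals the full-batch gradient, which is why data parallelism reproduces single-device training.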

Hexo Fluid