HeadsNet

The basic concept is to train an FNN/MLP on vertex data of 3D heads so that it can then reproduce random 3D heads.

This dataset uses the thispersondoesnotexist_to_triposr_6748_3D_Heads dataset as a foundation.

The heads dataset was collected using the scraper Dataset_Scraper.7z, based on TripoSR, which converts the 2D images from ThisPersonDoesNotExist into 3D meshes (using this marching cubes improvement by thatname/zephyr).

Vertex normals need to be generated before we can work with this dataset; the easiest way to achieve this is with a simple Blender script:

import bpy
import glob
import pathlib
from os import mkdir
from os.path import isdir
importDir = "ply/"
outputDir = "ply_norm/"
if not isdir(outputDir): mkdir(outputDir)

# start from an empty scene so the default cube/camera/light
# are not exported along with the first model
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete(use_global=False)

for file in glob.glob(importDir + "*.ply"):
    model_name = pathlib.Path(file).stem
    # skip models that have already been converted
    if pathlib.Path(outputDir+model_name+'.ply').is_file(): continue
    # importing a PLY without normals makes Blender generate them
    bpy.ops.wm.ply_import(filepath=file)
    bpy.ops.wm.ply_export(
                            filepath=outputDir+model_name+'.ply',
                            filter_glob='*.ply',
                            check_existing=False,
                            ascii_format=False,
                            export_selected_objects=False,
                            apply_modifiers=True,
                            export_triangulated_mesh=True,
                            export_normals=True,
                            export_uv=False,
                            export_colors='SRGB',
                            global_scale=1.0,
                            forward_axis='Y',
                            up_axis='Z'
                        )
    # delete the imported mesh and purge orphan data so the scene is
    # empty for the next import (purge repeatedly to catch data blocks
    # that only become orphaned by the previous purge)
    bpy.ops.object.select_all(action='SELECT')
    bpy.ops.object.delete(use_global=False)
    bpy.ops.outliner.orphans_purge()
    bpy.ops.outliner.orphans_purge()
    bpy.ops.outliner.orphans_purge()

Importing the PLY without normals causes Blender to automatically generate them.

At this point the PLY files need to be converted to training data. For this I wrote a C program, DatasetGen_2_6.7z, which uses RPLY to load the PLY files and convert them to binary data; the result is provided here as HeadsNet-2-6.7z.

It's always good to NaN-check your training data after generating it, so I have provided a simple Python script for that here: nan_check.py.
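The idea of such a check is simple; a minimal sketch (using the file names from the loading code below) might look like this:

import numpy as np

# report the NaN count of each raw float32 binary file
for name in ("train_x.dat", "train_y.dat"):
    data = np.fromfile(name, dtype=np.float32)
    print(f"{name}: {data.size} floats, {np.isnan(data).sum()} NaN")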

This binary training data can be loaded into Python using NumPy:

import numpy as np

with open("train_x.dat", 'rb') as f:
    load_x = np.fromfile(f, dtype=np.float32)

with open("train_y.dat", 'rb') as f:
    load_y = np.fromfile(f, dtype=np.float32)

The data can then be reshaped and saved back out as a NumPy array, which makes for faster loading:

inputsize = 2                  # random seed & icosphere position index
outputsize = 6                 # vertex position x,y,z & vertex color r,g,b
training_samples = 632847695
train_x = np.reshape(load_x, [training_samples, inputsize])
train_y = np.reshape(load_y, [training_samples, outputsize])
np.save("train_x.npy", train_x)
np.save("train_y.npy", train_y)

There are 632,847,695 samples; each sample has 2 components in train_x (random seed & 0-1 unit sphere position index) and 6 components in train_y (vertex position [x,y,z] & vertex color [r,g,b]).
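So, for example, a single sample pairs up as:

seed, sphere_idx = train_x[0]    # random seed and 0-1 icosphere position index
x, y, z, r, g, b = train_y[0]    # vertex position and vertex color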

The basic premise of how this network is trained and thus how the dataset is generated in the C program is:

  1. All models are pre-scaled to a normalized cubic scale and then scaled again by 0.55 so that they all fit within a unit sphere.
  2. Every model vertex is reverse traced from the vertex position to the perimeter of the unit sphere, using the vertex normal as the direction.
  3. The nearest position on a 10,242 vertex icosphere is found, and the network is trained to output the model vertex position and vertex color (6 components) at the index of that icosphere vertex.
  4. The icosphere vertex index is scaled to a 0-1 range before being input to the network.
  5. The network has only two input parameters; the other parameter is a 0-1 model ID, selected at random, and all vertices of a specific model are trained into the network using that ID. The ID does not change per-vertex, only per 3D model.
  6. The ID lets the user treat this parameter as a sort of hyper-parameter for the random seed: to generate a random head, input a random 0-1 seed and iterate the icosphere index parameter across the 0-1 range. For a 20,000 vertex head you would step from 0 to 1 in 20,000 increments of 0.00005, as the network outputs one vertex position and vertex color per forward-pass (see the sketch after this list).
  • 1st input parameter = random seed
  • 2nd input parameter = icosphere index
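As a concrete example of step 6, a minimal inference sketch; `model` is a placeholder for however you load the trained network, and the Keras-style predict call is an assumption:

import numpy as np

vertices = 20000                      # desired vertex count of the generated head
seed = np.random.uniform(0.0, 1.0)   # one fixed random seed per head

points, colors = [], []
for i in range(vertices):
    sphere_idx = i / vertices        # steps of 0.00005 across the 0-1 range
    # one forward-pass yields one vertex; model.predict is a placeholder
    out = model.predict(np.array([[seed, sphere_idx]], dtype=np.float32))[0]
    points.append(out[:3])           # vertex position x,y,z
    colors.append(out[3:])           # vertex color r,g,b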

More about this type of network topology can be read here: https://gist.github.com/mrbid/1eacdd9d9239b2d324a3fa88591ff852

Improvements

  • Future networks will have 3 additional input parameters, one for each x,y,z component of a unit vector giving the ray direction from the icosphere index.
  • The unit vector used to train the network will simply be the vertex normal from the 3D model, inverted.
  • When performing inference, more forward-passes would be needed, as some density of rays within a 30° or similar cone angle pointing towards 0,0,0 would have to be evaluated per icosphere index position (see the sketch after this list).
  • This could result in higher quality outputs, at the cost of an order of magnitude more forward-pass iterations.
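A hedged sketch of how such a cone of ray directions could be sampled, uniform over the spherical cap around a given axis (the function name and NumPy implementation are assumptions, not the eventual dataset code):

import numpy as np

def sample_cone(axis, half_angle_rad, n):
    # uniformly sample n unit vectors within a cone around axis
    axis = axis / np.linalg.norm(axis)
    # uniform over the cap: cos(theta) is uniform in [cos(half_angle), 1]
    cos_t = np.random.uniform(np.cos(half_angle_rad), 1.0, n)
    sin_t = np.sqrt(1.0 - cos_t**2)
    phi = np.random.uniform(0.0, 2.0 * np.pi, n)
    local = np.stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t], axis=1)
    # rotate the local +Z frame onto the cone axis (Rodrigues' formula)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(z, axis)
    c = np.dot(z, axis)
    if np.allclose(v, 0.0):
        return local if c > 0.0 else -local  # axis is (anti-)parallel to +Z
    K = np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]])
    R = np.eye(3) + K + K @ K * (1.0 / (1.0 + c))
    return local @ R.T

For an icosphere vertex v (a unit vector), something like sample_cone(-v, np.radians(15.0), 32) would give 32 directions within a 30° cone pointing towards 0,0,0.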

Updates

  • A new dataset has been generated: HeadsNet-2-6_v2.7z. The old dataset uses a 10,242 vertex unit icosphere while the new one uses a 655,362 vertex unit icosphere, which should lead to a higher quality network. Start training with it instantly using HeadsNet_v2_Trainer_with_Dataset.7z.
  • The system didn't work out; here I have trained models of various qualities: HeadsNet_Trained_Models.7z. The network has some potential, and with a better refined dataset and better network topology it could prove more successful.
  • Added HeadsNet3 using the FaceTo3D dataset, a pivot where I attempted to train an FNN/MLP on a 1024 component input vector of a 32x32 grayscale image of a face to output a 32x32x32 grayscale voxel volume of a 3D head. Results were not overwhelmingly positive; I had higher hopes. (A hypothetical sketch of this topology follows this list.)
  • Added HeadsNet4. This version uses 32^3 micro MLPs with a single voxel grayscale output per network; the trained datasets are included along with the program to generate them, but you will need to download the PLY files from the FaceTo3D repository to generate the dataset.
  • This is being continued with more success over at GitHub: https://github.com/mrbid/FaceTo3D
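The HeadsNet3 topology mentioned above is not specified here beyond its input/output shapes; as an illustration only, a hypothetical Keras-style MLP with those shapes (the layer sizes, activations, and loss are assumptions):

import tensorflow as tf

# hypothetical layer sizes; only the 1024-in / 32^3-out shapes come from the text
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1024,)),                   # 32x32 grayscale face, flattened
    tf.keras.layers.Dense(2048, activation='relu'),
    tf.keras.layers.Dense(2048, activation='relu'),
    tf.keras.layers.Dense(32*32*32, activation='sigmoid'),  # 32x32x32 voxel intensities
])
model.compile(optimizer='adam', loss='mse')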