inexorgame / vulkan-renderer
A new 3D game engine for Linux and Windows using C++20 and Vulkan API 1.3, in very early but ongoing development
Home Page: https://inexor.org
License: MIT License
The Vulkan spec states:
Pipeline cache objects allow the result of pipeline construction to be reused between pipelines and between runs of an application. Reuse between pipelines is achieved by passing the same pipeline cache object when creating multiple related pipelines. Reuse across runs of an application is achieved by retrieving pipeline cache contents in one run of an application, saving the contents, and using them to preinitialize a pipeline cache on a subsequent run. The contents of the pipeline cache objects are managed by the implementation. Applications can manage the host memory consumed by a pipeline cache object and control the amount of data retrieved from a pipeline cache object.
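Persisting the cache between runs boils down to a blob round-trip to disk. Below is a minimal sketch of the file I/O half, using hypothetical helper names; in the actual Vulkan calls, the blob would come from vkGetPipelineCacheData after rendering and be fed back through VkPipelineCacheCreateInfo::pInitialData on the next run.

```cpp
#include <cstddef>
#include <fstream>
#include <string>
#include <vector>

// Read a previously saved pipeline cache blob from disk.
// Returns an empty vector if the file does not exist or cannot be read.
std::vector<char> load_pipeline_cache_blob(const std::string &file_name) {
    std::ifstream file(file_name, std::ios::binary | std::ios::ate);
    if (!file) {
        return {};
    }
    const std::streamsize size = file.tellg();
    std::vector<char> blob(static_cast<std::size_t>(size));
    file.seekg(0);
    file.read(blob.data(), size);
    return blob;
}

// Write the pipeline cache blob to disk. In Vulkan, this blob would be
// obtained via vkGetPipelineCacheData before shutdown.
bool save_pipeline_cache_blob(const std::string &file_name, const std::vector<char> &blob) {
    std::ofstream file(file_name, std::ios::binary | std::ios::trunc);
    if (!file) {
        return false;
    }
    file.write(blob.data(), static_cast<std::streamsize>(blob.size()));
    return static_cast<bool>(file);
}
```

Note that a robust implementation should also validate the blob header (vendor ID, device ID, cache UUID) before reuse, since a cache saved on one driver version may be rejected by another.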
-no_separate_data_queue
Most GPUs have a distinct data transfer queue, which can (and should!) be used to transfer data from CPU to GPU. When this command line argument is specified, the distinct data transfer queue will not be used. Instead, the program tries to find any other queue which has VK_QUEUE_TRANSFER_BIT. This is very likely a graphics queue.
Using the dedicated transfer queue is highly advised by many references, especially Tips and Tricks: Vulkan Dos and Don'ts by NVIDIA.
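The selection logic can be sketched without pulling in Vulkan headers. The constants below mirror the real VK_QUEUE_* bit values; the function name is illustrative, not the engine's actual API. Note that in Vulkan, queues with the graphics or compute bit implicitly support transfer even if the transfer bit is not advertised, which the fallback path relies on.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Plain stand-ins for the Vulkan queue flag bits (same values as VK_QUEUE_*).
constexpr std::uint32_t QUEUE_GRAPHICS_BIT = 0x1;
constexpr std::uint32_t QUEUE_COMPUTE_BIT = 0x2;
constexpr std::uint32_t QUEUE_TRANSFER_BIT = 0x4;

// Returns the index of a dedicated transfer queue family (transfer bit set,
// graphics and compute bits clear). When none exists, or when
// prefer_dedicated is false, falls back to any family with the transfer bit
// (very likely a graphics queue). Returns -1 if no family qualifies.
int find_transfer_queue_family(const std::vector<std::uint32_t> &family_flags,
                               bool prefer_dedicated) {
    if (prefer_dedicated) {
        for (std::size_t i = 0; i < family_flags.size(); i++) {
            const auto flags = family_flags[i];
            if ((flags & QUEUE_TRANSFER_BIT) && !(flags & QUEUE_GRAPHICS_BIT) &&
                !(flags & QUEUE_COMPUTE_BIT)) {
                return static_cast<int>(i);
            }
        }
    }
    for (std::size_t i = 0; i < family_flags.size(); i++) {
        if (family_flags[i] & QUEUE_TRANSFER_BIT) {
            return static_cast<int>(i);
        }
    }
    return -1;
}
```

The -no_separate_data_queue flag would then simply pass prefer_dedicated = false.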
Looking at Sascha Willems' example code, it looks like this needs some more planning ahead.
The memory mapping we currently use is much slower than using a staging buffer, as pointed out by the Vulkan Tutorial.
If multiple GPUs are available, it is often difficult to let the program decide which one to use. It is already possible to specify a preferred GPU using the -GPU <index> command line argument.
To finalize the GPU selection mechanism, we must handle the case where two graphics cards are available, one of type VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU and the other of type VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU. In this case, we prefer the "real" graphics card (VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU).
If multiple VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU devices are available, we sort out the unsuitable ones using VulkanSettingsDecisionMaker::is_graphics_card_is_suitable. After this, we rank the remaining graphics cards by their memory size.
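The filter-then-rank step described above can be sketched with plain structs standing in for the relevant parts of VkPhysicalDeviceProperties; the type and field names here are illustrative assumptions, not the engine's actual API.

```cpp
#include <algorithm>
#include <cstdint>
#include <string>
#include <vector>

// Stand-in for the device type reported by VkPhysicalDeviceProperties.
enum class GpuType { integrated, discrete, other };

struct GpuInfo {
    std::string name;
    GpuType type;
    std::uint64_t memory_size; // total device-local memory in bytes
    bool suitable;             // result of the suitability check
};

// Remove unsuitable GPUs, then sort so that discrete cards come first and,
// within the same type, larger memory wins.
std::vector<GpuInfo> rank_gpus(std::vector<GpuInfo> gpus) {
    gpus.erase(std::remove_if(gpus.begin(), gpus.end(),
                              [](const GpuInfo &gpu) { return !gpu.suitable; }),
               gpus.end());
    std::sort(gpus.begin(), gpus.end(), [](const GpuInfo &a, const GpuInfo &b) {
        if (a.type != b.type) {
            return a.type == GpuType::discrete;
        }
        return a.memory_size > b.memory_size;
    });
    return gpus;
}
```

The -GPU <index> override would then bypass this ranking entirely and pick the requested device directly.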
The CMake function(assign_source_group) is included twice. With the use of parameters, this duplication can be reduced.
Implement the program like this:
while (true)
{
    VkResult result = renderer.init();
    if (VK_SUCCESS == result)
    {
        renderer.run();
        renderer.calculate_memory_budget();
        renderer.cleanup();
        spdlog::debug("Window closed.");
    }
    else
    {
        // Something went wrong when initialising the engine!
        vulkan_error_check(result);
        return -1;
    }
}
And see if the program can be re-initialised.
If we want to compile shaders using glslangValidator.exe, we must make sure vertex shaders end with .vert and fragment shaders end with .frag. The current batch script is broken.
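The extension check that the build script needs can be mirrored on the C++ side; the helper names below are illustrative assumptions, not existing engine functions.

```cpp
#include <string>

// True if the string ends with the given suffix (avoids relying on
// C++20's std::string::ends_with for portability).
bool has_suffix(const std::string &text, const std::string &suffix) {
    return text.size() >= suffix.size() &&
           text.compare(text.size() - suffix.size(), suffix.size(), suffix) == 0;
}

// Map a shader file name to the stage glslangValidator expects,
// based on the file extension. Returns an empty string for files
// that are neither vertex nor fragment shaders.
std::string shader_stage_from_extension(const std::string &file_name) {
    if (has_suffix(file_name, ".vert")) {
        return "vertex";
    }
    if (has_suffix(file_name, ".frag")) {
        return "fragment";
    }
    return "";
}
```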
There's something wrong with the current render output:
The input color of the vertices is not respected. Everything is green.
The Vulkan debug callback gets the following message:
[ UNASSIGNED-CoreValidation-Shader-OutputNotConsumed ] Object: 0x731f0f000000000a (Type = 15) | vertex shader writes to output location 0.0 which is not consumed by fragment shader
Something seems to be wrong with the fragment shader.
There is an example code which can be used as a reference:
https://renderdoc.org/vulkan-in-30-minutes.html
Bundle everything related to device queues into a manager class called VulkanQueueManager.
Tests and benchmarks are broken.
Also, they are not even fully implemented yet.
This will be very useful for the first tech demo!
When writing or reading data, it is important to use std::mutex to avoid race conditions! The current code is wrong!
The following code protects update_entry from simultaneous writing:
bool update_entry(const std::string &type_name, const std::shared_ptr<T> new_type)
{
    if (!does_type_exist(type_name))
    {
        return false;
    }
    // Use lock guard to ensure thread safety.
    std::lock_guard<std::mutex> lock(type_manager_lock);
    // Update the entry.
    stored_types[type_name] = new_type;
    return true;
}
But we can generate a race condition by reading the data here:
std::optional<std::shared_ptr<T>> get_entry(const std::string &type_name)
{
    if (does_key_exist(type_name))
    {
        // No mutex required as this is a read only operation.
        return stored_types[type_name];
    }
    return std::nullopt;
}
This also affects the entity system. I need to fix it!
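The comment in the read path is mistaken: an unsynchronised read concurrent with a write is a data race. One way to fix both paths is a single lock discipline over the whole container; the sketch below uses std::shared_mutex so concurrent reads stay cheap while writes are exclusive. The class is illustrative and only mirrors the method names from the snippets above.

```cpp
#include <memory>
#include <mutex>
#include <optional>
#include <shared_mutex>
#include <string>
#include <unordered_map>

template <typename T>
class TypeManager {
    mutable std::shared_mutex type_manager_lock;
    std::unordered_map<std::string, std::shared_ptr<T>> stored_types;

public:
    bool add_entry(const std::string &type_name, std::shared_ptr<T> new_type) {
        std::unique_lock lock(type_manager_lock);
        return stored_types.emplace(type_name, std::move(new_type)).second;
    }

    bool update_entry(const std::string &type_name, std::shared_ptr<T> new_type) {
        // Take the exclusive lock BEFORE the existence check, so that
        // check and update happen atomically.
        std::unique_lock lock(type_manager_lock);
        auto it = stored_types.find(type_name);
        if (it == stored_types.end()) {
            return false;
        }
        it->second = std::move(new_type);
        return true;
    }

    std::optional<std::shared_ptr<T>> get_entry(const std::string &type_name) const {
        // A shared lock is required even for reads.
        std::shared_lock lock(type_manager_lock);
        auto it = stored_types.find(type_name);
        if (it == stored_types.end()) {
            return std::nullopt;
        }
        return it->second;
    }
};
```

This also fixes a subtler bug in the original update_entry: the existence check ran before the lock was taken, so the check-then-update pair was not atomic.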
Define whose responsibility it is to handle errors.
Use spdlog as standard output.
When shutting down Vulkan resources, Vulkan Memory Allocator does the following:
if (g_hCommandPool != VK_NULL_HANDLE)
{
    vkDestroyCommandPool(g_hDevice, g_hCommandPool, g_Allocs);
    g_hCommandPool = VK_NULL_HANDLE;
}
Do the same in vulkan-renderer.
Implement using the swapchain cache in VulkanInitialisation::create_swapchain(), which currently sets:
swapchain_create_info.oldSwapchain = VK_NULL_HANDLE;
void test_class::initialise(const VkDevice &device)
{
    this->device = device;
}
Check whether formats other than the standard format are supported on the machine.
const VkFormat image_format = VK_FORMAT_B8G8R8A8_UNORM;
We can use one memory allocation for both buffers (buffer size = size of vertices + size of indices). This has several advantages for the management (residency) of the buffer's memory. We can use offset settings to place the beginning of the index buffer right after the end of the vertex buffer.
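The offset arithmetic for such a combined allocation is simple but must respect the alignment the device reports (e.g. via VkMemoryRequirements). A minimal sketch, assuming a power-of-two alignment and hypothetical type names:

```cpp
#include <cstdint>

// Round a value up to the next multiple of a power-of-two alignment.
constexpr std::uint64_t align_up(std::uint64_t value, std::uint64_t alignment) {
    return (value + alignment - 1) & ~(alignment - 1);
}

struct CombinedBufferLayout {
    std::uint64_t vertex_offset;
    std::uint64_t index_offset;
    std::uint64_t total_size;
};

// Place the index data right after the vertex data in one allocation,
// aligning the index region to the device's requirement.
constexpr CombinedBufferLayout layout_combined_buffer(std::uint64_t vertex_size,
                                                      std::uint64_t index_size,
                                                      std::uint64_t alignment) {
    const std::uint64_t index_offset = align_up(vertex_size, alignment);
    return {0, index_offset, index_offset + index_size};
}
```

The resulting offsets would then be passed to vkCmdBindVertexBuffers and vkCmdBindIndexBuffer against the same VkBuffer.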
Use the official Vulkan specification to check if every function call in the current code is valid.
class Sampler
{
    Sampler()
    {
    }
    ~Sampler()
    {
    }
};

can be written as

class Sampler
{
    Sampler() = default;
    ~Sampler() = default;
};
https://en.cppreference.com/w/cpp/thread/thread/hardware_concurrency
This method is only a hint. It might return 0 if the result could not be determined.
Let's define a default value of 8 working threads in that case.
#define INEXOR_THREADPOOL_BACKUP_CPU_CORE_COUNT 8
We can change this as we want. In the worst case, we create more threads than there are CPU cores available, generating overhead.
This is a very rare case, I guess, but we should still account for it!
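The fallback described above fits in a few lines; the function name is illustrative, and the constant mirrors INEXOR_THREADPOOL_BACKUP_CPU_CORE_COUNT.

```cpp
#include <thread>

// Fallback used when std::thread::hardware_concurrency() returns 0,
// i.e. when the core count could not be determined.
constexpr unsigned int BACKUP_CPU_CORE_COUNT = 8;

// Number of worker threads for the thread pool: the detected core count,
// or the backup value when detection fails.
unsigned int worker_thread_count() {
    const unsigned int detected = std::thread::hardware_concurrency();
    return detected != 0 ? detected : BACKUP_CPU_CORE_COUNT;
}
```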
When the application starts, I can see the following:
Everything is working correctly as long as I don't change the window resolution or just minimize/reopen the window from the task bar. As soon as I start resizing the window, I see the texture getting corrupted:
With increasing window size, the corruption gets more severe:
When I maximize the window, I can see this:
RenderDoc tells me that the texture's image memory or image sampler is corrupted:
Use Sascha Willems code examples and Vulkan Memory Allocator's code example as reference.
Add imgui to the vulkan-renderer.
Run it in a separate thread.
One of the big problems with glfw3 is its C-style API design. This topic has been elaborated in a Stack Overflow post. If we want to create an input callback, we cannot use class methods for this, as C does not know about them. We have to use a global static input callback:
static void keyboard_input_callback_reloader(GLFWwindow *window, int key, int scancode, int action, int mods)
{
    auto app = reinterpret_cast<InexorApplication *>(glfwGetWindowUserPointer(window));
    app->keyboard_input_callback(window, key, scancode, action, mods);
}
// Store the current InexorApplication instance in the GLFW window user pointer.
// Since GLFW is a C-style API, we can't use a class method as callback for window resize!
// TODO: Refactor! Don't use callback functions! use manual polling in the render loop instead.
glfwSetWindowUserPointer(window, this);
// Setup callback for window resize.
// Since GLFW is a C-style API, we can't use a class method as callback for window resize!
glfwSetFramebufferSizeCallback(window, frame_buffer_resize_callback);
This is such an unnecessary, ugly hack. I don't like it at all. We will use glfwGetKey instead.