That big table of descriptors is an unusual data object. It's normally mapped as writable from the CPU and readable from the GPU. The CPU side has to track which slots are in use, and when the CPU is done with a texture slot, the GPU might not be done yet, so all deletes have to be deferred until the end of the frame. This isn't inherently difficult, but it has to be designed in.
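A minimal sketch of that bookkeeping, assuming a double-buffered frame loop: a free list of slots plus per-frame buckets of deferred frees, flushed once the frame's fence guarantees the GPU is done with those descriptors. `BindlessAllocator` and `kFramesInFlight` are illustrative names, not any particular engine's API.

```cpp
#include <cstdint>
#include <vector>

constexpr uint32_t kFramesInFlight = 2;  // assumption: double-buffered frames

class BindlessAllocator {
public:
    explicit BindlessAllocator(uint32_t capacity) {
        free_.reserve(capacity);
        for (uint32_t i = capacity; i-- > 0;) free_.push_back(i);
    }

    // Grab a slot for a newly created texture's descriptor.
    uint32_t Allocate() {
        uint32_t slot = free_.back();
        free_.pop_back();
        return slot;
    }

    // CPU is done with the slot, but in-flight frames may still read it:
    // park it until this frame index comes around again.
    void DeferFree(uint32_t slot, uint32_t frame_index) {
        pending_[frame_index % kFramesInFlight].push_back(slot);
    }

    // Call once per frame, after waiting on this frame slot's fence,
    // so the GPU can no longer be referencing these descriptors.
    void Flush(uint32_t frame_index) {
        auto& bucket = pending_[frame_index % kFramesInFlight];
        free_.insert(free_.end(), bucket.begin(), bucket.end());
        bucket.clear();
    }

private:
    std::vector<uint32_t> free_;
    std::vector<uint32_t> pending_[kFramesInFlight];
};
```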
The usual curse of inefficient Vulkan is maxing out the main CPU thread long before the GPU is fully utilized. This is fixable: there can be multiple draw threads, and assets can be loaded into the GPU while drawing is in progress, using transfer queues and DMA. On integrated-memory GPUs you don't even have to copy from CPU to GPU at all. If you do all this, GPU utilization should reach 100% before CPU utilization does.
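For the transfer-queue part, the common pattern is to hunt for a queue family that exposes TRANSFER but not GRAPHICS or COMPUTE, which on discrete GPUs is typically the dedicated DMA engine. A sketch in plain Vulkan (the selection heuristic is convention, not something the spec mandates):

```cpp
#include <vulkan/vulkan.h>
#include <optional>
#include <vector>

std::optional<uint32_t> FindDedicatedTransferQueue(VkPhysicalDevice gpu) {
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
    std::vector<VkQueueFamilyProperties> families(count);
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, families.data());

    for (uint32_t i = 0; i < count; ++i) {
        const VkQueueFlags flags = families[i].queueFlags;
        // TRANSFER set but GRAPHICS/COMPUTE clear usually means the
        // copy engine, which can run uploads alongside rendering.
        if ((flags & VK_QUEUE_TRANSFER_BIT) &&
            !(flags & (VK_QUEUE_GRAPHICS_BIT | VK_QUEUE_COMPUTE_BIT))) {
            return i;
        }
    }
    return std::nullopt;  // fall back to uploading on the graphics queue
}
```

On integrated GPUs you instead pick a memory type with both DEVICE_LOCAL and HOST_VISIBLE bits and write into it directly, which is what lets you skip the staging copy entirely.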
Except most of this stuff doesn't work on mobile or WebGPU yet. Portable code ends up with way too many special cases. Look at WGPU.
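If you write the portable path by hand, the special cases start with feature probing. Something like the following gates the bindless path at runtime; the two fields checked are my guess at the minimum a big descriptor table needs, and hardware that fails the check has to take a classic-binding fallback.

```cpp
#include <vulkan/vulkan.h>

bool SupportsBindless(VkPhysicalDevice gpu) {
    VkPhysicalDeviceDescriptorIndexingFeatures indexing{};
    indexing.sType =
        VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_DESCRIPTOR_INDEXING_FEATURES;

    VkPhysicalDeviceFeatures2 features2{};
    features2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2;
    features2.pNext = &indexing;

    vkGetPhysicalDeviceFeatures2(gpu, &features2);

    // Non-uniform indexing into sampled-image arrays, plus updating
    // descriptors while command buffers using the set are in flight.
    return indexing.shaderSampledImageArrayNonUniformIndexing &&
           indexing.descriptorBindingSampledImageUpdateAfterBind;
}
```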
What you get by using Unity or Unreal Engine is that a big team has already worked through all this overhead. Most of the open-source renderers aren't there yet, or weren't as of a year ago.
That said, the non-indexable object that remains is the pipeline itself. Some engines have tens of thousands of shaders.
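The standard mitigation is memoizing pipelines behind a hashed state key, so each shader/state permutation is at least only built once and reused. A hand-wavy sketch; `PipelineKey` and the `CreatePipeline` stub are placeholders for whatever the engine actually hashes and builds:

```cpp
#include <vulkan/vulkan.h>
#include <cstdint>
#include <unordered_map>

struct PipelineKey {
    uint64_t shader_hash;  // hash of the shader variant
    uint64_t state_hash;   // blend/depth/vertex-layout state
    bool operator==(const PipelineKey& o) const {
        return shader_hash == o.shader_hash && state_hash == o.state_hash;
    }
};

struct PipelineKeyHash {
    size_t operator()(const PipelineKey& k) const {
        return k.shader_hash ^ (k.state_hash * 0x9E3779B97F4A7C15ull);
    }
};

class PipelineMemo {
public:
    VkPipeline GetOrCreate(const PipelineKey& key) {
        auto it = cache_.find(key);
        if (it != cache_.end()) return it->second;
        VkPipeline p = CreatePipeline(key);  // placeholder: the real
        cache_.emplace(key, p);              // vkCreateGraphicsPipelines call
        return p;
    }

private:
    VkPipeline CreatePipeline(const PipelineKey&);  // stub, engine-specific
    std::unordered_map<PipelineKey, VkPipeline, PipelineKeyHash> cache_;
};
```

Even memoized, the first-use hitch is real, which is why engines precompile or warm the cache during loading screens.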
I once experimented with such an approach. It generally works, but it's hard to do: you have to write general game logic in nothing but a shading language, and debugging is practically non-existent.