venv "D:\AI\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.9.3
Commit hash: 1c0a0c4c26f78c32095ebc7f8af82f5c04fca8c0
Launching Web UI with arguments: --lowvram --precision full --no-half --skip-torch-cuda-test
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Loading weights [6ce0161689] from D:\AI\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: D:\AI\stable-diffusion-webui\configs\v1-inference.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
D:\AI\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Startup time: 11.7s (prepare environment: 0.3s, import torch: 4.6s, import gradio: 1.3s, setup paths: 1.4s, initialize shared: 0.5s, other imports: 0.7s, load scripts: 1.1s, create ui: 0.9s, gradio launch: 0.7s).
Applying attention optimization: Doggettx... done.
Model loaded in 5.5s (load weights from disk: 1.1s, create model: 0.6s, apply weights to model: 3.3s, calculate empty prompt: 0.4s).
 10%|████████▎ | 2/20 [00:06<00:58,  3.22s/it]
Exception in thread MemMon:
Traceback (most recent call last):
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "D:\AI\stable-diffusion-webui\modules\memmon.py", line 53, in run
    free, total = self.cuda_mem_get_info()
  File "D:\AI\stable-diffusion-webui\modules\memmon.py", line 34, in cuda_mem_get_info
    return torch.cuda.mem_get_info(index)
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\memory.py", line 663, in mem_get_info
    return torch.cuda.cudart().cudaMemGetInfo(device)
RuntimeError: CUDA error: an illegal instruction was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
*** Error completing request
*** Arguments: ('task(rxe7j3hmflc31l0)', <gradio.routes.Request object at 0x000002B1642C5A20>, 'hello', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
  File "D:\AI\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "D:\AI\stable-diffusion-webui\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "D:\AI\stable-diffusion-webui\modules\txt2img.py", line 109, in txt2img
    processed = processing.process_images(p)
  File "D:\AI\stable-diffusion-webui\modules\processing.py", line 845, in process_images
    res = process_images_inner(p)
  File "D:\AI\stable-diffusion-webui\modules\processing.py", line 981, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "D:\AI\stable-diffusion-webui\modules\processing.py", line 1328, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "D:\AI\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 218, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\AI\stable-diffusion-webui\modules\sd_samplers_common.py", line 272, in launch_sampling
    return func()
  File "D:\AI\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 218, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\AI\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 237, in forward
    x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "D:\AI\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "D:\AI\stable-diffusion-webui\modules\sd_hijack_utils.py", line 18, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "D:\AI\stable-diffusion-webui\modules\sd_hijack_utils.py", line 32, in __call__
    return self.__orig_func(*args, **kwargs)
  File "D:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
    return original_forward(self, x, timesteps, context, *args, **kwargs)
  File "D:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
    h = module(h, emb, context)
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1568, in _call_impl
    result = forward_call(*args, **kwargs)
  File "D:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
    x = layer(x, context)
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
    x = block(x, context=context[i])
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
    return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
  File "D:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\function.py", line 539, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "D:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "D:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 273, in _forward
    x = self.attn2(self.norm2(x), context=context) + x
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\stable-diffusion-webui\modules\sd_hijack_optimizations.py", line 240, in split_cross_attention_forward
    q, k, v = (rearrange(t, 'b n (h d) -> (b h) n d', h=h) for t in (q_in, k_in, v_in))
  File "D:\AI\stable-diffusion-webui\modules\sd_hijack_optimizations.py", line 240, in <genexpr>
    q, k, v = (rearrange(t, 'b n (h d) -> (b h) n d', h=h) for t in (q_in, k_in, v_in))
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\einops\einops.py", line 487, in rearrange
    return reduce(tensor, pattern, reduction='rearrange', **axes_lengths)
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\einops\einops.py", line 410, in reduce
    return _apply_recipe(recipe, tensor, reduction_type=reduction)
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\einops\einops.py", line 239, in _apply_recipe
    return backend.reshape(tensor, final_shapes)
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\einops\_backends.py", line 84, in reshape
    return x.reshape(shape)
RuntimeError: CUDA error: an illegal instruction was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

---
Traceback (most recent call last):
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "D:\AI\stable-diffusion-webui\modules\call_queue.py", line 77, in f
    devices.torch_gc()
  File "D:\AI\stable-diffusion-webui\modules\devices.py", line 81, in torch_gc
    torch.cuda.empty_cache()
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\memory.py", line 159, in empty_cache
    torch._C._cuda_emptyCache()
RuntimeError: CUDA error: an illegal instruction was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
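The log above suggests passing CUDA_LAUNCH_BLOCKING=1 so CUDA errors are reported synchronously at the call that actually triggered them. A sketch of how that could be set in webui-user.bat, reusing the launch arguments from the log (the exact file layout here is an assumption, not the reporter's actual file):

```bat
@echo off

rem Launch arguments taken from the log above
set COMMANDLINE_ARGS=--lowvram --precision full --no-half --skip-torch-cuda-test

rem Report CUDA errors synchronously so the stack trace points at the real failing call
set CUDA_LAUNCH_BLOCKING=1

call webui.bat
```

With this set, the next run should produce a stack trace whose failing frame can be trusted, at the cost of slower generation.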
Checklist
What happened?
Generation stops and displays this error:
CUDA error: an illegal instruction was encountered
The complete log is given below.
Steps to reproduce the problem
The error occurs while generating an image, shortly after clicking the Generate button.
What should have happened?
It should have generated an image. Instead, generation reaches about 10 percent and then quits.
What browsers do you use to access the UI?
No response
Sysinfo
sysinfo-2024-05-13-05-11.json
Console logs
Additional information
I updated to the latest NVIDIA driver.
I tried installing Fooocus, but only because 1111 stopped working.
I also have oobabooga installed on my system. Maybe the CUDA version there is different?
Could it possibly clash with 1111?
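One way to investigate a suspected CUDA mismatch is to run `python -c "import torch; print(torch.version.cuda)"` inside each install's venv and compare the results. A minimal, hypothetical helper for comparing the reported version strings (the name `same_cuda_major` is mine, not part of either project):

```python
def same_cuda_major(a: str, b: str) -> bool:
    """Return True if two CUDA version strings (e.g. '12.1', '11.8')
    share the same major version, which is the part that usually
    matters for compatibility between torch builds."""
    return a.split(".")[0] == b.split(".")[0]

# Compare the CUDA versions reported by each venv's torch build
print(same_cuda_major("12.1", "12.4"))  # True  -> same major, unlikely to conflict
print(same_cuda_major("11.8", "12.1"))  # False -> different CUDA generations
```

Note that since each webui keeps its own venv with its own torch, differing CUDA versions across installs would not normally clash; they only matter relative to the installed NVIDIA driver.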