
AROS ABIv1 SMP.

Last updated 5 hours ago
terminills (Junior Member)
Posted 5 hours ago
Here's a screenshot of AROS ABIv1 SMP running on a 128-core server (currently in QEMU). I've been sending patches over to kalamatee to review, and in time I expect AROS to be bootable on high-end servers. Does it need to be? Absolutely not, but it will help with stability in the long term.
Jeff1138 (Member)
Posted 5 hours ago
Hi,

Interesting, over half the cores are at 100%. Can you tell us what is running?
terminills (Junior Member)
Posted 5 hours ago

Jeff1138 wrote:

Hi,

Interesting, over half the cores are at 100%. Can you tell us what is running?

I'm working on this: a port of llama.cpp. Here's the serial debug log from a test run with the stories15M Q8_0 model:


000000f4e35723e3 0x000000000107ca80 | 000 | [LlamaCpp] Fatal signal handlers installed
000000f5381cbe3d 0x000000000107ca80 | 000 | [LlamaCpp][SMP] detected_cores=128 ggml_max_threads=512
000000f538f0e258 0x000000000107ca80 | 000 | [LlamaCpp][SMP] mode=strict-affinity (set LLAMACPP_AROS_STRICT_AFFINITY=0 for scheduler-managed fallback)
000000f53a2e3981 0x000000000107ca80 | 000 | [LlamaCpp][SMP] n_threads=4 strict=1 cpumask=0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,...
000000f53cbbbb5b 0x000000000107ca80 | 000 | [LlamaCpp][SMP] n_threads_batch=4 strict_batch=1 cpumask_batch=0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,...
000000f53f4922f0 0x000000000107ca80 | 000 | [LlamaCpp] CLI params parsed, applying AROS SMP defaults done
000000f54a517ac6 0x000000000107ca80 | 000 | [LlamaCpp] Serial log bridge installed
000000f54b0dc582 0x000000000107ca80 | 000 | [LlamaCpp] common_init complete, serial bridge active
000000f54beb6e6e 0x000000000107ca80 | 000 | [LlamaCpp] Compute HIDD headers missing; building CPU-only path
000000f54cf47a17 0x000000000107ca80 | 000 | [LlamaCpp] entering common_init_from_params
000000f54da10df9 0x000000000107ca80 | 000 | [LlamaCpp] llama_params_fit_impl: getting device memory data for initial parameters:
000000f635d59d1a 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: loaded meta data with 20 key-value pairs and 57 tensors from LlamaCpp-Models/stories15M-q8_0.gguf (version GGUF V3 (latest))
000000f637c5b54c 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
000000f63b80c082 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 0: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
000000f63fcd76ee 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 1: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
000000f643048cfd 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 2: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
000000f6449d4efb 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 3: tokenizer.ggml.model str = llama
000000f645d07479 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 4: general.architecture str = llama
000000f647361ac6 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 5: general.name str = llama
000000f6486e8bee 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 6: tokenizer.ggml.unknown_token_id u32 = 0
000000f64993bcf6 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 7: tokenizer.ggml.bos_token_id u32 = 1
000000f64b0cb1be 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 8: tokenizer.ggml.eos_token_id u32 = 2
000000f64c7da503 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 9: tokenizer.ggml.seperator_token_id u32 = 4294967295
000000f64dd4f0e6 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 10: tokenizer.ggml.padding_token_id u32 = 4294967295
000000f64f33f0f4 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 11: llama.context_length u32 = 128
000000f65057a057 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 12: llama.embedding_length u32 = 288
000000f65193c6b7 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 13: llama.feed_forward_length u32 = 768
000000f652d6e5c2 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 14: llama.attention.head_count u32 = 6
000000f654269a7b 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 15: llama.block_count u32 = 6
000000f6557552c3 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 16: llama.rope.dimension_count u32 = 48
000000f656a01228 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 17: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
000000f658143f67 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 18: general.quantization_version u32 = 2
000000f6596e90ec 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 19: general.file_type u32 = 7
000000f65a955e42 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - type f32: 13 tensors
000000f65b808ad9 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - type q8_0: 44 tensors
000000f65c6d4db0 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: mmap is not supported on this platform
000000f65d6743a1 0x000000000107ca80 | 000 | [LlamaCpp] print_info: file format = GGUF V3 (latest)
000000f65e3b321f 0x000000000107ca80 | 000 | [LlamaCpp] print_info: file type = Q8_0
000000f65efa2508 0x000000000107ca80 | 000 | [LlamaCpp] print_info: file size = 24.74 MiB (8.50 BPW)
000000f66256d31d 0x000000000107ca80 | 000 | [LlamaCpp] init_tokenizer: initializing tokenizer for type 1
000000f6632397ea 0x000000000107ca80 | 000 | [LlamaCpp] load: bad special token: 'tokenizer.ggml.seperator_token_id' = 4294967295, using default id -1
000000f6642fee72 0x000000000107ca80 | 000 | [LlamaCpp] load: bad special token: 'tokenizer.ggml.padding_token_id' = 4294967295, using default id -1
000000f6678ad253 0x000000000107ca80 | 000 | [LlamaCpp] load: 0 unused tokens
000000f668ba4730 0x000000000107ca80 | 000 | [LlamaCpp] load: control token: 1 '<s>' is not marked as EOG
000000f669df0ddf 0x000000000107ca80 | 000 | [LlamaCpp] load: printing all EOG tokens:
000000f66a8ab509 0x000000000107ca80 | 000 | [LlamaCpp] load: - 2 ('</s>')
000000f66b49da3d 0x000000000107ca80 | 000 | [LlamaCpp] load: special tokens cache size = 3
000000f66e2ed58c 0x000000000107ca80 | 000 | [LlamaCpp] load: token to piece cache size = 0.1684 MB
000000f66f07da62 0x000000000107ca80 | 000 | [LlamaCpp] print_info: arch = llama
000000f66fe0bb2f 0x000000000107ca80 | 000 | [LlamaCpp] print_info: vocab_only = 0
000000f670b9edb9 0x000000000107ca80 | 000 | [LlamaCpp] print_info: no_alloc = 1
000000f6719f57b4 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_ctx_train = 128
000000f67264cd50 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_embd = 288
000000f673398cfc 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_embd_inp = 288
000000f673fc7e74 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_layer = 6
000000f674cae4d5 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_head = 6
000000f67560bc67 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_head_kv = 6
000000f675f5c988 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_rot = 48
000000f6768d3d9a 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_swa = 0
000000f677360fab 0x000000000107ca80 | 000 | [LlamaCpp] print_info: is_swa_any = 0
000000f6780ba968 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_embd_head_k = 48
000000f678ef88db 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_embd_head_v = 48
000000f679bcc481 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_gqa = 1
000000f67a8bab1b 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_embd_k_gqa = 288
000000f67b61b8f5 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_embd_v_gqa = 288
000000f67c36046b 0x000000000107ca80 | 000 | [LlamaCpp] print_info: f_norm_eps = 0.0e+00
000000f67cf4fd17 0x000000000107ca80 | 000 | [LlamaCpp] print_info: f_norm_rms_eps = 1.0e-05
000000f67ddc1e3b 0x000000000107ca80 | 000 | [LlamaCpp] print_info: f_clamp_kqv = 0.0e+00
000000f67ed12516 0x000000000107ca80 | 000 | [LlamaCpp] print_info: f_max_alibi_bias = 0.0e+00
000000f67fc12552 0x000000000107ca80 | 000 | [LlamaCpp] print_info: f_logit_scale = 0.0e+00
000000f680945a76 0x000000000107ca80 | 000 | [LlamaCpp] print_info: f_attn_scale = 0.0e+00
000000f6816c703c 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_ff = 768
000000f68236ea9c 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_expert = 0
000000f6830c03bc 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_expert_used = 0
000000f683b794d4 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_expert_groups = 0
000000f6846c1d3c 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_group_used = 0
000000f685292184 0x000000000107ca80 | 000 | [LlamaCpp] print_info: causal attn = 1
000000f685d147ce 0x000000000107ca80 | 000 | [LlamaCpp] print_info: pooling type = 0
000000f68689c0e7 0x000000000107ca80 | 000 | [LlamaCpp] print_info: rope type = 0
000000f6874abadb 0x000000000107ca80 | 000 | [LlamaCpp] print_info: rope scaling = linear
000000f6881766c1 0x000000000107ca80 | 000 | [LlamaCpp] print_info: freq_base_train = 10000.0
000000f689024b2d 0x000000000107ca80 | 000 | [LlamaCpp] print_info: freq_scale_train = 1
000000f689d59938 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_ctx_orig_yarn = 128
000000f68aa73fba 0x000000000107ca80 | 000 | [LlamaCpp] print_info: rope_yarn_log_mul = 0.0000
000000f68b7cdaa3 0x000000000107ca80 | 000 | [LlamaCpp] print_info: rope_finetuned = unknown
000000f68cc0fffc 0x000000000107ca80 | 000 | [LlamaCpp] print_info: model type = ?B
000000f68d914671 0x000000000107ca80 | 000 | [LlamaCpp] print_info: model params = 24.41 M
000000f68e86c40c 0x000000000107ca80 | 000 | [LlamaCpp] print_info: general.name = llama
000000f68f62e63f 0x000000000107ca80 | 000 | [LlamaCpp] print_info: vocab type = SPM
000000f690448acc 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_vocab = 32000
000000f691438649 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_merges = 0
000000f6920a2dda 0x000000000107ca80 | 000 | [LlamaCpp] print_info: BOS token = 1 '<s>'
000000f692e37e46 0x000000000107ca80 | 000 | [LlamaCpp] print_info: EOS token = 2 '</s>'
000000f693b5a678 0x000000000107ca80 | 000 | [LlamaCpp] print_info: UNK token = 0 '<unk>'
000000f6947cc640 0x000000000107ca80 | 000 | [LlamaCpp] print_info: LF token = 13 '<0x0A>'
000000f69552d81b 0x000000000107ca80 | 000 | [LlamaCpp] print_info: EOG token = 2 '</s>'
000000f696423b9a 0x000000000107ca80 | 000 | [LlamaCpp] print_info: max token length = 48
000000f69718c39f 0x000000000107ca80 | 000 | [LlamaCpp] load_tensors: loading model tensors, this can take a while... (mmap = false, direct_io = false)
000000f6987063c9 0x000000000107ca80 | 000 | [LlamaCpp] load_tensors: layer 0 assigned to device CPU, is_swa = 0
000000f69973f89e 0x000000000107ca80 | 000 | [LlamaCpp] load_tensors: layer 1 assigned to device CPU, is_swa = 0
000000f69a8eddcf 0x000000000107ca80 | 000 | [LlamaCpp] load_tensors: layer 2 assigned to device CPU, is_swa = 0
000000f69b73ba7b 0x000000000107ca80 | 000 | [LlamaCpp] load_tensors: layer 3 assigned to device CPU, is_swa = 0
000000f69c5c5da8 0x000000000107ca80 | 000 | [LlamaCpp] load_tensors: layer 4 assigned to device CPU, is_swa = 0
000000f69d580004 0x000000000107ca80 | 000 | [LlamaCpp] load_tensors: layer 5 assigned to device CPU, is_swa = 0
000000f69e4a45cb 0x000000000107ca80 | 000 | [LlamaCpp] load_tensors: layer 6 assigned to device CPU, is_swa = 0
000000f69f388051 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor token_embd.weight
000000f6a0353080 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor output_norm.weight
000000f6a1126f5e 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor output.weight
000000f6a1e86ec4 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.0.attn_norm.weight
000000f6a2d49d12 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.0.attn_q.weight
000000f6a3c9a86b 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.0.attn_k.weight
000000f6a49a102d 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.0.attn_v.weight
000000f6a57502aa 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.0.attn_output.weight
000000f6a6510665 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.0.ffn_norm.weight
000000f6a74eec26 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.0.ffn_gate.weight
000000f6a82f95a0 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.0.ffn_down.weight
000000f6a92c71a8 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.0.ffn_up.weight
000000f6aa1496f4 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.1.attn_norm.weight
000000f6aafe590b 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.1.attn_q.weight
000000f6abe25903 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.1.attn_k.weight
000000f6acc7ecfc 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.1.attn_v.weight
000000f6ada357c9 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.1.attn_output.weight
000000f6ae8f03d1 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.1.ffn_norm.weight
000000f6af8246b8 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.1.ffn_gate.weight
000000f6b0542dae 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.1.ffn_down.weight
000000f6b1923cec 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.1.ffn_up.weight
000000f6b26cdaf0 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.2.attn_norm.weight
000000f6b366d320 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.2.attn_q.weight
000000f6b44b2f48 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.2.attn_k.weight
000000f6b530f3fc 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.2.attn_v.weight
000000f6b5f580fa 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.2.attn_output.weight
000000f6b6dfe401 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.2.ffn_norm.weight
000000f6b7d9d02e 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.2.ffn_gate.weight
000000f6b8b2bdf8 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.2.ffn_down.weight
000000f6b99f4155 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.2.ffn_up.weight
000000f6ba7caf90 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.3.attn_norm.weight
000000f6bb717386 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.3.attn_q.weight
000000f6bc7a7b47 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.3.attn_k.weight
000000f6bd77d68e 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.3.attn_v.weight
000000f6be75baf1 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.3.attn_output.weight
000000f6bf419063 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.3.ffn_norm.weight
000000f6c024cc8e 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.3.ffn_gate.weight
000000f6c0fe0b1b 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.3.ffn_down.weight
000000f6c1eb63f2 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.3.ffn_up.weight
000000f6c2d0ef53 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.4.attn_norm.weight
000000f6c3b9b819 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.4.attn_q.weight
000000f6c4afdaf0 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.4.attn_k.weight
000000f6c5a25541 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.4.attn_v.weight
000000f6c68b1d26 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.4.attn_output.weight
000000f6c77bcd2a 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.4.ffn_norm.weight
000000f6c86abb47 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.4.ffn_gate.weight
000000f6c954306a 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.4.ffn_down.weight
000000f6ca52ec86 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.4.ffn_up.weight
000000f6cb490247 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.5.attn_norm.weight
000000f6cc36bf9b 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.5.attn_q.weight
000000f6cd249fb3 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.5.attn_k.weight
000000f6ce14c957 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.5.attn_v.weight
000000f6cf5eb7a5 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.5.attn_output.weight
000000f6d05a650a 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.5.ffn_norm.weight
000000f6d14ebfec 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.5.ffn_gate.weight
000000f6d226c275 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.5.ffn_down.weight
000000f6d313a2b1 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.5.ffn_up.weight
000000f6d3e898c2 0x000000000107ca80 | 000 | [LlamaCpp] load_tensors: CPU model buffer size = 0.00 MiB
000000f6dec648f7 0x000000000107ca80 | 000 | [LlamaCpp] llama_context: constructing llama_context
000000f6e399fb58 0x000000000107ca80 | 000 | [LlamaCpp] llama_context: n_seq_max = 1
000000f6e460a71c 0x000000000107ca80 | 000 | [LlamaCpp] llama_context: n_ctx = 256
000000f6e525161f 0x000000000107ca80 | 000 | [LlamaCpp] llama_context: n_ctx_seq = 256
000000f6e5ef83cd 0x000000000107ca80 | 000 | [LlamaCpp] llama_context: n_batch = 256
000000f6e6c140de 0x000000000107ca80 | 000 | [LlamaCpp] llama_context: n_ubatch = 256
000000f6e79eeadd 0x000000000107ca80 | 000 | [LlamaCpp] llama_context: causal_attn = 1
000000f6e85fcaf0 0x000000000107ca80 | 000 | [LlamaCpp] llama_context: flash_attn = auto
000000f6e936b01f 0x000000000107ca80 | 000 | [LlamaCpp] llama_context: kv_unified = false
000000f6ea0522b5 0x000000000107ca80 | 000 | [LlamaCpp] llama_context: freq_base = 10000.0
000000f6eac9d4e8 0x000000000107ca80 | 000 | [LlamaCpp] llama_context: freq_scale = 1
000000f6eb90aaf5 0x000000000107ca80 | 000 | [LlamaCpp] llama_context: n_ctx_seq (256) > n_ctx_train (128) -- possible training context overflow
000000f6eccf8e04 0x000000000107ca80 | 000 | [LlamaCpp] set_abort_callback: call
000000f6ed977310 0x000000000107ca80 | 000 | [LlamaCpp] llama_context: CPU output buffer size = 0.12 MiB
000000f6ee94c461 0x000000000107ca80 | 000 | [LlamaCpp] llama_kv_cache: layer 0: dev = CPU
000000f6ef736608 0x000000000107ca80 | 000 | [LlamaCpp] llama_kv_cache: layer 1: dev = CPU
000000f6f06d3453 0x000000000107ca80 | 000 | [LlamaCpp] llama_kv_cache: layer 2: dev = CPU
000000f6f155ae95 0x000000000107ca80 | 000 | [LlamaCpp] llama_kv_cache: layer 3: dev = CPU
000000f6f2233afe 0x000000000107ca80 | 000 | [LlamaCpp] llama_kv_cache: layer 4: dev = CPU
000000f6f2fa7e38 0x000000000107ca80 | 000 | [LlamaCpp] llama_kv_cache: layer 5: dev = CPU
000000f6f3e52230 0x000000000107ca80 | 000 | [LlamaCpp] llama_kv_cache: CPU KV buffer size = 0.00 MiB
000000f6f4f77bc8 0x000000000107ca80 | 000 | [LlamaCpp] llama_kv_cache: size = 1.69 MiB ( 256 cells, 6 layers, 1/1 seqs), K (f16): 0.84 MiB, V (f16): 0.84 MiB
000000f6f8d5a7c4 0x000000000107ca80 | 000 | [LlamaCpp] llama_context: enumerating backends
000000f6f98e05e9 0x000000000107ca80 | 000 | [LlamaCpp] llama_context: backend_ptrs.size() = 1
000000f6fa8291ea 0x000000000107ca80 | 000 | [LlamaCpp] sched_reserve: reserving ...
000000f6fb5f4842 0x000000000107ca80 | 000 | [LlamaCpp] sched_reserve: max_nodes = 1024
000000f702e677bb 0x000000000107ca80 | 000 | [LlamaCpp] sched_reserve: reserving full memory module
000000f703dd153a 0x000000000107ca80 | 000 | [LlamaCpp] sched_reserve: worst-case: n_tokens = 256, n_seqs = 1, n_outputs = 1
000000f705198a37 0x000000000107ca80 | 000 | [LlamaCpp] graph_reserve: reserving a graph for ubatch with n_tokens = 1, n_seqs = 1, n_outputs = 1
000000f70cfc10cb 0x000000000107ca80 | 000 | [LlamaCpp] sched_reserve: Flash Attention was auto, set to enabled
000000f70de23ec5 0x000000000107ca80 | 000 | [LlamaCpp] graph_reserve: reserving a graph for ubatch with n_tokens = 256, n_seqs = 1, n_outputs = 256
000000f7179fcfcc 0x000000000107ca80 | 000 | [LlamaCpp] graph_reserve: reserving a graph for ubatch with n_tokens = 1, n_seqs = 1, n_outputs = 1
000000f7228107b7 0x000000000107ca80 | 000 | [LlamaCpp] graph_reserve: reserving a graph for ubatch with n_tokens = 256, n_seqs = 1, n_outputs = 256
000000f72eb05e66 0x000000000107ca80 | 000 | [LlamaCpp] sched_reserve: CPU compute buffer size = 33.04 MiB
000000f72fbf643c 0x000000000107ca80 | 000 | [LlamaCpp] sched_reserve: graph nodes = 193
000000f730780915 0x000000000107ca80 | 000 | [LlamaCpp] sched_reserve: graph splits = 1
000000f731216267 0x000000000107ca80 | 000 | [LlamaCpp] sched_reserve: reserve took 360.70 ms, sched copies = 1
000000f731f50581 0x000000000107ca80 | 000 | [LlamaCpp] llama_memory_breakdown_print: | memory breakdown [MiB] | total free self model context compute unaccounted |
000000f7332044d4 0x000000000107ca80 | 000 | [LlamaCpp] llama_memory_breakdown_print: | - Host | 59 = 24 + 1 + 33 |
000000f736a5654e 0x000000000107ca80 | 000 | [LlamaCpp] llama_params_fit_impl: no devices with dedicated memory found
000000f73793cf31 0x000000000107ca80 | 000 | [LlamaCpp] llama_params_fit: successfully fit params to free device memory
000000f73949f1ca 0x000000000107ca80 | 000 | [LlamaCpp] llama_params_fit: fitting params to free memory took 1.87 seconds
000000f81ad8889f 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: loaded meta data with 20 key-value pairs and 57 tensors from LlamaCpp-Models/stories15M-q8_0.gguf (version GGUF V3 (latest))
000000f81c7a9747 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
000000f8203dfea0 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 0: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
000000f8245122ab 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 1: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
000000f826756399 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 2: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
000000f827fbf2b4 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 3: tokenizer.ggml.model str = llama
000000f82947d83b 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 4: general.architecture str = llama
000000f82a6a1e63 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 5: general.name str = llama
000000f82c16da26 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 6: tokenizer.ggml.unknown_token_id u32 = 0
000000f82d4c425a 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 7: tokenizer.ggml.bos_token_id u32 = 1
000000f82e8374db 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 8: tokenizer.ggml.eos_token_id u32 = 2
000000f82fc14c88 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 9: tokenizer.ggml.seperator_token_id u32 = 4294967295
000000f830ff2a43 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 10: tokenizer.ggml.padding_token_id u32 = 4294967295
000000f83241f4ee 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 11: llama.context_length u32 = 128
000000f833860ea8 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 12: llama.embedding_length u32 = 288
000000f834cbdd7e 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 13: llama.feed_forward_length u32 = 768
000000f835e1e0fa 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 14: llama.attention.head_count u32 = 6
000000f837207b16 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 15: llama.block_count u32 = 6
000000f838524e65 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 16: llama.rope.dimension_count u32 = 48
000000f83989c209 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 17: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
000000f83aeefd99 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 18: general.quantization_version u32 = 2
000000f83c0beacf 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - kv 19: general.file_type u32 = 7
000000f83d316c36 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - type f32: 13 tensors
000000f83de3aa7c 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: - type q8_0: 44 tensors
000000f83eb850ab 0x000000000107ca80 | 000 | [LlamaCpp] llama_model_loader: mmap is not supported on this platform
000000f83fa2b71d 0x000000000107ca80 | 000 | [LlamaCpp] print_info: file format = GGUF V3 (latest)
000000f8407d475c 0x000000000107ca80 | 000 | [LlamaCpp] print_info: file type = Q8_0
000000f84119e710 0x000000000107ca80 | 000 | [LlamaCpp] print_info: file size = 24.74 MiB (8.50 BPW)
000000f8448067e4 0x000000000107ca80 | 000 | [LlamaCpp] init_tokenizer: initializing tokenizer for type 1
000000f8454fe064 0x000000000107ca80 | 000 | [LlamaCpp] load: bad special token: 'tokenizer.ggml.seperator_token_id' = 4294967295, using default id -1
000000f8468df772 0x000000000107ca80 | 000 | [LlamaCpp] load: bad special token: 'tokenizer.ggml.padding_token_id' = 4294967295, using default id -1
000000f849da3a80 0x000000000107ca80 | 000 | [LlamaCpp] load: 0 unused tokens
000000f84b1f0b23 0x000000000107ca80 | 000 | [LlamaCpp] load: control token: 1 '<s>' is not marked as EOG
000000f84c5c9672 0x000000000107ca80 | 000 | [LlamaCpp] load: printing all EOG tokens:
000000f84d1481f1 0x000000000107ca80 | 000 | [LlamaCpp] load: - 2 ('</s>')
000000f84dc72eaa 0x000000000107ca80 | 000 | [LlamaCpp] load: special tokens cache size = 3
000000f850718e11 0x000000000107ca80 | 000 | [LlamaCpp] load: token to piece cache size = 0.1684 MB
000000f85131b7ea 0x000000000107ca80 | 000 | [LlamaCpp] print_info: arch = llama
000000f851fd29d9 0x000000000107ca80 | 000 | [LlamaCpp] print_info: vocab_only = 0
000000f852bf6080 0x000000000107ca80 | 000 | [LlamaCpp] print_info: no_alloc = 0
000000f85391f770 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_ctx_train = 128
000000f854a3463c 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_embd = 288
000000f855883aee 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_embd_inp = 288
000000f85639e58c 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_layer = 6
000000f856f62206 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_head = 6
000000f857d061cd 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_head_kv = 6
000000f858ae618a 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_rot = 48
000000f859700d53 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_swa = 0
000000f85a2ee034 0x000000000107ca80 | 000 | [LlamaCpp] print_info: is_swa_any = 0
000000f85b09d059 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_embd_head_k = 48
000000f85bc680f0 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_embd_head_v = 48
000000f85c79d8c1 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_gqa = 1
000000f85d28bdcd 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_embd_k_gqa = 288
000000f85de62edf 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_embd_v_gqa = 288
000000f85e9c569c 0x000000000107ca80 | 000 | [LlamaCpp] print_info: f_norm_eps = 0.0e+00
000000f85f77dd0c 0x000000000107ca80 | 000 | [LlamaCpp] print_info: f_norm_rms_eps = 1.0e-05
000000f8604b4287 0x000000000107ca80 | 000 | [LlamaCpp] print_info: f_clamp_kqv = 0.0e+00
000000f861185d5d 0x000000000107ca80 | 000 | [LlamaCpp] print_info: f_max_alibi_bias = 0.0e+00
000000f861d791b4 0x000000000107ca80 | 000 | [LlamaCpp] print_info: f_logit_scale = 0.0e+00
000000f862769b47 0x000000000107ca80 | 000 | [LlamaCpp] print_info: f_attn_scale = 0.0e+00
000000f86329f205 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_ff = 768
000000f863eb42b1 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_expert = 0
000000f864a3658f 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_expert_used = 0
000000f865531525 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_expert_groups = 0
000000f86629e4f1 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_group_used = 0
000000f867064b80 0x000000000107ca80 | 000 | [LlamaCpp] print_info: causal attn = 1
000000f867caf340 0x000000000107ca80 | 000 | [LlamaCpp] print_info: pooling type = 0
000000f8688ba8bf 0x000000000107ca80 | 000 | [LlamaCpp] print_info: rope type = 0
000000f8693dd74c 0x000000000107ca80 | 000 | [LlamaCpp] print_info: rope scaling = linear
000000f86a15207b 0x000000000107ca80 | 000 | [LlamaCpp] print_info: freq_base_train = 10000.0
000000f86adfcf1a 0x000000000107ca80 | 000 | [LlamaCpp] print_info: freq_scale_train = 1
000000f86b98af53 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_ctx_orig_yarn = 128
000000f86c55e6e0 0x000000000107ca80 | 000 | [LlamaCpp] print_info: rope_yarn_log_mul = 0.0000
000000f86d1a94ae 0x000000000107ca80 | 000 | [LlamaCpp] print_info: rope_finetuned = unknown
000000f86de77e97 0x000000000107ca80 | 000 | [LlamaCpp] print_info: model type = ?B
000000f86eb51c7b 0x000000000107ca80 | 000 | [LlamaCpp] print_info: model params = 24.41 M
000000f86fa7d0b5 0x000000000107ca80 | 000 | [LlamaCpp] print_info: general.name = llama
000000f870857235 0x000000000107ca80 | 000 | [LlamaCpp] print_info: vocab type = SPM
000000f871494fcf 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_vocab = 32000
000000f87239a3bc 0x000000000107ca80 | 000 | [LlamaCpp] print_info: n_merges = 0
000000f873012626 0x000000000107ca80 | 000 | [LlamaCpp] print_info: BOS token = 1 '<s>'
000000f873e3880e 0x000000000107ca80 | 000 | [LlamaCpp] print_info: EOS token = 2 '</s>'
000000f874dab4d0 0x000000000107ca80 | 000 | [LlamaCpp] print_info: UNK token = 0 '<unk>'
000000f8763b2a35 0x000000000107ca80 | 000 | [LlamaCpp] print_info: LF token = 13 '<0x0A>'
000000f8770eb2bf 0x000000000107ca80 | 000 | [LlamaCpp] print_info: EOG token = 2 '</s>'
000000f877d4deae 0x000000000107ca80 | 000 | [LlamaCpp] print_info: max token length = 48
000000f87888d9c7 0x000000000107ca80 | 000 | [LlamaCpp] load_tensors: loading model tensors, this can take a while... (mmap = false, direct_io = false)
000000f879afad12 0x000000000107ca80 | 000 | [LlamaCpp] load_tensors: layer 0 assigned to device CPU, is_swa = 0
000000f87ac8596a 0x000000000107ca80 | 000 | [LlamaCpp] load_tensors: layer 1 assigned to device CPU, is_swa = 0
000000f87bb242dc 0x000000000107ca80 | 000 | [LlamaCpp] load_tensors: layer 2 assigned to device CPU, is_swa = 0
000000f87cae6c2e 0x000000000107ca80 | 000 | [LlamaCpp] load_tensors: layer 3 assigned to device CPU, is_swa = 0
000000f87da8e0c8 0x000000000107ca80 | 000 | [LlamaCpp] load_tensors: layer 4 assigned to device CPU, is_swa = 0
000000f87e9c9849 0x000000000107ca80 | 000 | [LlamaCpp] load_tensors: layer 5 assigned to device CPU, is_swa = 0
000000f87fa72051 0x000000000107ca80 | 000 | [LlamaCpp] load_tensors: layer 6 assigned to device CPU, is_swa = 0
000000f880a0d1e6 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor token_embd.weight
000000f8815a3f23 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor output_norm.weight
000000f882429bce 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor output.weight
000000f88337a452 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.0.attn_norm.weight
000000f8842518cc 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.0.attn_q.weight
000000f884ff21fc 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.0.attn_k.weight
000000f885f80137 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.0.attn_v.weight
000000f886e1da8c 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.0.attn_output.weight
000000f887cfdead 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.0.ffn_norm.weight
000000f888b0aa3c 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.0.ffn_gate.weight
000000f889ad4585 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.0.ffn_down.weight
000000f88aa7ddab 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.0.ffn_up.weight
000000f88b9bb1b0 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.1.attn_norm.weight
000000f88c75e2b8 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.1.attn_q.weight
000000f88d443627 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.1.attn_k.weight
000000f88e1c866c 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.1.attn_v.weight
000000f88ef770b5 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.1.attn_output.weight
000000f88fe7b809 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.1.ffn_norm.weight
000000f890cb46a0 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.1.ffn_gate.weight
000000f891de159a 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.1.ffn_down.weight
000000f892bd92fb 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.1.ffn_up.weight
000000f8939fd05d 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.2.attn_norm.weight
000000f894988eaf 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.2.attn_q.weight
000000f8959555ea 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.2.attn_k.weight
000000f896962320 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.2.attn_v.weight
000000f8979172de 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.2.attn_output.weight
000000f89863e12e 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.2.ffn_norm.weight
000000f8998cfe9b 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.2.ffn_gate.weight
000000f89a6c1c2f 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.2.ffn_down.weight
000000f89ba61235 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.2.ffn_up.weight
000000f89c8731c0 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.3.attn_norm.weight
000000f89d665242 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.3.attn_q.weight
000000f89e4f9d4e 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.3.attn_k.weight
000000f89f4383e1 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.3.attn_v.weight
000000f8a02245db 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.3.attn_output.weight
000000f8a115bc9d 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.3.ffn_norm.weight
000000f8a1f947e2 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.3.ffn_gate.weight
000000f8a2df593f 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.3.ffn_down.weight
000000f8a3cc7ed1 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.3.ffn_up.weight
000000f8a49f1a3f 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.4.attn_norm.weight
000000f8a58b459f 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.4.attn_q.weight
000000f8a684f013 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.4.attn_k.weight
000000f8a7750ac6 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.4.attn_v.weight
000000f8a85b717d 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.4.attn_output.weight
000000f8a9482482 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.4.ffn_norm.weight
000000f8aa1beb41 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.4.ffn_gate.weight
000000f8ab0d65e8 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.4.ffn_down.weight
000000f8abf7cb60 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.4.ffn_up.weight
000000f8acce496f 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.5.attn_norm.weight
000000f8adb1324d 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.5.attn_q.weight
000000f8ae668206 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.5.attn_k.weight
000000f8af417212 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.5.attn_v.weight
000000f8b024c931 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.5.attn_output.weight
000000f8b1092158 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.5.ffn_norm.weight
000000f8b1ea57ef 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.5.ffn_gate.weight
000000f8b2c59ae4 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.5.ffn_down.weight
000000f8b3bb9db3 0x000000000107ca80 | 000 | [LlamaCpp] create_tensor: loading tensor blk.5.ffn_up.weight
000000f8b4b1c8d7 0x000000000107ca80 | 000 | [LlamaCpp] load_tensors: CPU model buffer size = 24.74 MiB
000000f8b5b79397 0x000000000107ca80 | 000 | [LlamaCpp] load_all_data: no device found for buffer type CPU for async uploads
000000fffe4d3aec 0x000000000107ca80 | 000 | [LlamaCpp] .[LlamaCpp] .[LlamaCpp] .[LlamaCpp] .[LlamaCpp] .[LlamaCpp] .[LlamaCpp] .[LlamaCpp] .[LlamaCpp] .[LlamaCpp] .[LlamaCpp] .[LlamaCpp] .[LlamaCpp] .[LlamaCpp] .[LlamaCpp] .[LlamaCpp] .[LlamaCpp] .[LlamaCpp] .[LlamaCpp] .[LlamaCpp] .[LlamaCpp] .[LlamaCpp] .[LlamaCpp] .[LlamaCpp] .[LlamaCpp] .[LlamaCpp] .[LlamaCpp] .[LlamaCpp]
0000010b8c137818 0x000000000107ca80 | 000 | [LlamaCpp] llama_context: constructing llama_context
0000010b8d4675be 0x000000000107ca80 | 000 | [LlamaCpp] llama_context: n_seq_max = 1
0000010b8e0c34ca 0x000000000107ca80 | 000 | [LlamaCpp] llama_context: n_ctx = 256
0000010b8ec9abd1 0x000000000107ca80 | 000 | [LlamaCpp] llama_context: n_ctx_seq = 256
0000010b8f7e0879 0x000000000107ca80 | 000 | [LlamaCpp] llama_context: n_batch = 256
0000010b90666317 0x000000000107ca80 | 000 | [LlamaCpp] llama_context: n_ubatch = 256
0000010b912273b8 0x000000000107ca80 | 000 | [LlamaCpp] llama_context: causal_attn = 1
0000010b91c865d9 0x000000000107ca80 | 000 | [LlamaCpp] llama_context: flash_attn = auto
0000010b928890f7 0x000000000107ca80 | 000 | [LlamaCpp] llama_context: kv_unified = false
0000010b9356a870 0x000000000107ca80 | 000 | [LlamaCpp] llama_context: freq_base = 10000.0
0000010b9439e65d 0x000000000107ca80 | 000 | [LlamaCpp] llama_context: freq_scale = 1
0000010b9505c70a 0x000000000107ca80 | 000 | [LlamaCpp] llama_context: n_ctx_seq (256) > n_ctx_train (128) -- possible training context overflow
0000010b9656f679 0x000000000107ca80 | 000 | [LlamaCpp] set_abort_callback: call
0000010b9713cb32 0x000000000107ca80 | 000 | [LlamaCpp] llama_context: CPU output buffer size = 0.12 MiB
0000010b981f6766 0x000000000107ca80 | 000 | [LlamaCpp] llama_kv_cache: layer 0: dev = CPU
0000010b98d71992 0x000000000107ca80 | 000 | [LlamaCpp] llama_kv_cache: layer 1: dev = CPU
0000010b99883907 0x000000000107ca80 | 000 | [LlamaCpp] llama_kv_cache: layer 2: dev = CPU
0000010b9a36942f 0x000000000107ca80 | 000 | [LlamaCpp] llama_kv_cache: layer 3: dev = CPU
0000010b9ae771d2 0x000000000107ca80 | 000 | [LlamaCpp] llama_kv_cache: layer 4: dev = CPU
0000010b9b94019a 0x000000000107ca80 | 000 | [LlamaCpp] llama_kv_cache: layer 5: dev = CPU
0000010b9c475312 0x000000000107ca80 | 000 | [LlamaCpp] llama_kv_cache: CPU KV buffer size = 1.69 MiB
0000010b9d197aae 0x000000000107ca80 | 000 | [LlamaCpp] llama_kv_cache: size = 1.69 MiB ( 256 cells, 6 layers, 1/1 seqs), K (f16): 0.84 MiB, V (f16): 0.84 MiB
0000010b9efe3710 0x000000000107ca80 | 000 | [LlamaCpp] llama_context: enumerating backends
0000010b9fc7cc6f 0x000000000107ca80 | 000 | [LlamaCpp] llama_context: backend_ptrs.size() = 1
0000010ba09b24bb 0x000000000107ca80 | 000 | [LlamaCpp] sched_reserve: reserving ...
0000010ba1a4dca8 0x000000000107ca80 | 000 | [LlamaCpp] sched_reserve: max_nodes = 1024
0000010ba77134bf 0x000000000107ca80 | 000 | [LlamaCpp] sched_reserve: reserving full memory module
0000010ba854d409 0x000000000107ca80 | 000 | [LlamaCpp] sched_reserve: worst-case: n_tokens = 256, n_seqs = 1, n_outputs = 1
0000010ba95f4424 0x000000000107ca80 | 000 | [LlamaCpp] graph_reserve: reserving a graph for ubatch with n_tokens = 1, n_seqs = 1, n_outputs = 1
0000010bb29e8e82 0x000000000107ca80 | 000 | [LlamaCpp] sched_reserve: Flash Attention was auto, set to enabled
0000010bb3c14142 0x000000000107ca80 | 000 | [LlamaCpp] graph_reserve: reserving a graph for ubatch with n_tokens = 256, n_seqs = 1, n_outputs = 256
0000010bbd916990 0x000000000107ca80 | 000 | [LlamaCpp] graph_reserve: reserving a graph for ubatch with n_tokens = 1, n_seqs = 1, n_outputs = 1
0000010bc46d0567 0x000000000107ca80 | 000 | [LlamaCpp] graph_reserve: reserving a graph for ubatch with n_tokens = 256, n_seqs = 1, n_outputs = 256
0000010bcfd42485 0x000000000107ca80 | 000 | [LlamaCpp] sched_reserve: CPU compute buffer size = 33.04 MiB
0000010bd0d10317 0x000000000107ca80 | 000 | [LlamaCpp] sched_reserve: graph nodes = 193
0000010bd1986f3d 0x000000000107ca80 | 000 | [LlamaCpp] sched_reserve: graph splits = 1
0000010bd2491b2b 0x000000000107ca80 | 000 | [LlamaCpp] sched_reserve: reserve took 328.21 ms, sched copies = 1
0000010bd32d11aa 0x000000000107ca80 | 000 | [LlamaCpp] set_adapters_lora: adapters = 0000000000000000
0000010bd41caff3 0x000000000107ca80 | 000 | [LlamaCpp] adapters_lora_are_same: adapters = 0000000000000000
0000010bd5962490 0x000000000107ca80 | 000 | [LlamaCpp] set_warmup: value = 1
0000010bf63f093c 0x000000000107ca80 | 000 | [LlamaCpp] [ggml][AROS-SMP] threadpool-create n_threads=4 strict=1 poll=50
0000010bf7651ba8 0x000000000107ca80 | 000 | [LlamaCpp] [ggml][AROS-SMP] threadpool-map ith=0 mask_first=3 mask_valid=1
0000010bf8ab37d6 0x000000000107ca80 | 000 | [LlamaCpp] [ggml][AROS-SMP] threadpool-map ith=1 mask_first=0 mask_valid=1
0000010bfa5221c1 0x000000000107ca80 | 000 | [LlamaCpp] [ggml][AROS-SMP] threadpool-map ith=2 mask_first=1 mask_valid=1
0000010bfb6e37ed 0x000000000107ca80 | 000 | [LlamaCpp] [ggml][AROS-SMP] threadpool-map ith=3 mask_first=2 mask_valid=1
0000010bfd9e460c 0x00000000010294e0 | 000 | [LlamaCpp] [ggml][AROS-SMP] worker-affinity-pre ith=1 target_cpu=0 mask_valid=1 enabled=1
0000010bfececa7e 0x00000000010294e0 | 000 | [LlamaCpp] [ggml][AROS-SMP] worker-affinity-post ith=1
0000010bffb82cc8 0x00000000010294e0 | 000 | [LlamaCpp] [ggml][AROS-SMP] worker-start ith=1 target_cpu=0 mask_valid=1
0000010c2739cb80 0x0000000001080120 | 000 | [LlamaCpp] [ggml][AROS-SMP] worker-affinity-pre ith=2 target_cpu=1 mask_valid=1 enabled=1
0000010c286eabc4 0x0000000001080120 | 000 | [LlamaCpp] [ggml][AROS-SMP] worker-affinity-post ith=2
0000010c2948bcdd 0x0000000001080120 | 000 | [LlamaCpp] [ggml][AROS-SMP] worker-start ith=2 target_cpu=1 mask_valid=1
0000010c50c44467 0x0000000001080430 | 000 | [LlamaCpp] [ggml][AROS-SMP] worker-affinity-pre ith=3 target_cpu=2 mask_valid=1 enabled=1
0000010c52112a15 0x0000000001080430 | 000 | [LlamaCpp] [ggml][AROS-SMP] worker-affinity-post ith=3
0000010c5302d496 0x0000000001080430 | 000 | [LlamaCpp] [ggml][AROS-SMP] worker-start ith=3 target_cpu=2 mask_valid=1
0000010c7e0ed6da 0x000000000107ca80 | 000 | [LlamaCpp] [ggml][AROS-SMP] kickoff #0 n_threads=4 pause=0
0000010c7fdca9ce 0x000000000107ca80 | 000 | [LlamaCpp] [ggml][AROS-SMP] first-compute ith=0 target_cpu=3 cplan_threads=4
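
To make the strict-affinity part at the end a bit more concrete: with n_threads=4 each worker ends up pinned to a single core, the main task (ith=0) to core 3 and workers 1-3 to cores 0-2, i.e. worker i goes to core (i-1) mod n_threads. The AROS build does the pinning through the port's own scheduler hooks rather than pthreads, but a rough POSIX sketch of the same idea would look like this (illustration only, not the actual AROS code; the worker() function and the hard-coded thread count are assumptions taken from the log above):

/* Rough POSIX illustration of the strict-affinity pinning shown in the log.
 * Not the AROS implementation - AROS goes through its own scheduler calls;
 * this only demonstrates the ith -> core mapping (worker 0 -> core 3, worker i -> core i-1). */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define N_THREADS 4   /* n_threads=4, as reported in the log */

static void *worker(void *arg)
{
    int ith = (int)(long)arg;
    int target_cpu = (ith + N_THREADS - 1) % N_THREADS;   /* same mapping as the threadpool-map lines */

    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(target_cpu, &mask);
    if (pthread_setaffinity_np(pthread_self(), sizeof(mask), &mask) != 0)
        fprintf(stderr, "worker %d: failed to pin to cpu %d\n", ith, target_cpu);
    else
        printf("worker %d pinned to cpu %d\n", ith, target_cpu);

    /* the ggml compute loop would run here */
    return NULL;
}

int main(void)
{
    pthread_t tid[N_THREADS];
    for (long i = 1; i < N_THREADS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);
    worker((void *)0);   /* main thread doubles as worker 0, like the first-compute line */
    for (int i = 1; i < N_THREADS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}

With strict=1 the workers never migrate off their assigned cores; that is what the worker-affinity-pre/post lines are confirming before each worker-start.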