ThinkPHP: Instantiating a Class from an Included PHP File Requires a Leading Backslash

<?php
namespace app\portal\service;

// Load the FPDF library directly from its file path
include "/home/wwwroot/test.test.org.cn/vendor/pdf/fpdf.php";

class PdfService
{
    public function TitleList($title, $files){
        // FPDF lives in the global namespace, so the leading backslash
        // is required inside the app\portal\service namespace
        $pdf = new \FPDF();
        $pdf->AddPage();
        $pdf->SetFont('Arial', 'B', 16);
        $pdf->Cell(0, 10, 'Hello World!', 0, 1, 'C');
        // 'F' = write the PDF to a local file
        $pdf->Output('/home/wwwroot/test.test.org.cn/public/example.pdf', 'F');
    }
}

Importing the PHP file: include "/home/wwwroot/test.test.org.cn/vendor/pdf/fpdf.php";

Instantiating the class requires the leading backslash: $pdf = new \FPDF();. Because this file declares namespace app\portal\service, an unqualified new FPDF() would resolve to app\portal\service\FPDF and fail; the backslash tells PHP to look up FPDF in the global namespace, where the included library defines it.

MLX: Fusing LoRA Weights and Generating GGUF

The fuse command below merges the LoRA adapter and exports safetensors files, which load directly into LM Studio; Ollama, however, does not support them. Previously that meant installing llama.cpp to convert them to GGUF format, but today I found that MLX has built-in GGUF conversion:

mlx_lm.fuse \
    --model ../../qwen2.5-0.5B \
    --adapter-path adapters \
    --save-path qwen2.5-0.5B-test_1 \
    --export-gguf

The command above exports an F16-precision GGUF file. By default the GGUF model is saved to fused_model/ggml-model-f16.gguf, but you can specify the filename with the --gguf-path option.
Only float16 is supported: the exported precision is always float16, cannot be changed, and requires no extra flag.
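
For example, to pick the GGUF filename yourself (a sketch reusing the paths from the command above; the filename is arbitrary):

mlx_lm.fuse \
    --model ../../qwen2.5-0.5B \
    --adapter-path adapters \
    --save-path qwen2.5-0.5B-test_1 \
    --export-gguf \
    --gguf-path qwen2.5-0.5B-f16.gguf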

# The --help output, for reference
mlx_lm.fuse --help
Loading pretrained model
usage: mlx_lm.fuse [-h] [--model MODEL] [--save-path SAVE_PATH]
                   [--adapter-path ADAPTER_PATH] [--hf-path HF_PATH]
                   [--upload-repo UPLOAD_REPO] [--de-quantize] [--export-gguf]
                   [--gguf-path GGUF_PATH]

Fuse fine-tuned adapters into the base model.

options:
  -h, --help            show this help message and exit
  --model MODEL         The path to the local model directory or Hugging Face
                        repo.
  --save-path SAVE_PATH
                        The path to save the fused model.
  --adapter-path ADAPTER_PATH
                        Path to the trained adapter weights and config.
  --hf-path HF_PATH     Path to the original Hugging Face model. Required for
                        upload if --model is a local directory.
  --upload-repo UPLOAD_REPO
                        The Hugging Face repo to upload the model to.
  --de-quantize         Generate a de-quantized model.
  --export-gguf         Export model weights in GGUF format.
  --gguf-path GGUF_PATH
                        Path to save the exported GGUF format model weights.
                        Default is ggml-model-f16.gguf.

Qwen Is Not Supported by MLX GGUF Export; Use llama.cpp

MLX's GGUF export is limited: the conversion only supports Mistral-, Mixtral-, and Llama-style models at fp16 precision.
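
For Qwen, then, drop --export-gguf and fuse to safetensors only; a sketch reusing the paths from above:

mlx_lm.fuse \
    --model ../../qwen2.5-0.5B \
    --adapter-path adapters \
    --save-path qwen2.5-0.5B-test_1

The resulting directory can then be converted with llama.cpp, as shown at the end of this post.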

Qwen2.5-1.5B MLX LoRA Fine-Tuning

Download the model:
export HF_ENDPOINT=https://hf-mirror.com

huggingface-cli download --resume-download Qwen/Qwen2.5-1.5B-Instruct --local-dir qwen2.5-1.5B

Training data (one JSON object per line):

{"prompt": "什么狼不吃羊?", "completion": "黄鼠狼"}
{"prompt": "什么人怕太阳?", "completion": "雪人"}
{"prompt": "什么时候做的事别人看不到?", "completion": "梦里做事"}
{"prompt": "农夫养了10头牛,为什么只有19只角?", "completion": "因为一只是犀牛"}
{"prompt": "什么东西没脚走天下?", "completion": "船"}
{"prompt": "什么桥下没水?", "completion": "立交桥"}
{"prompt": "小呆骑在大牛身上,为什么大牛不吃草?", "completion": "大牛是人"}
{"prompt": "在什么时候1加2不等于3?", "completion": "在算错了的时候。"}
{"prompt": "什么山没有石?", "completion": "冰山"}
{"prompt": "为什么天上下雨地面一点不湿?", "completion": "下的是流星雨。"}
{"prompt": "蚊子咬在什么地方你不会觉得痒?", "completion": "别人身上"}
{"prompt": "什么东西只有一只脚,却能跑遍屋子的所有角落?", "completion": "扫帚"}
{"prompt": "最好吃的饭是什么饭?", "completion": "软饭"}
{"prompt": "太阳和月亮在一起是哪一天?", "completion": "明天"}

Ollama Modelfile Setup

The template published at https://ollama.com/library/qwen2.5:3b can be used for the Modelfile.

Modelfile:

FROM /Users/may/new_xxxxx.gguf
# Import the GGUF file; the TEMPLATE below must be set

TEMPLATE """{{- if .Messages }}
{{- if or .System .Tools }}<|im_start|>system
{{- if .System }}
{{ .System }}
{{- end }}
{{- if .Tools }}

# Tools

You may call one or more functions to assist with the user query.

You are provided with function signatures within <tools></tools> XML tags:
<tools>
{{- range .Tools }}
{"type": "function", "function": {{ .Function }}}
{{- end }}
</tools>

For each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:
<tool_call>
{"name": <function-name>, "arguments": <args-json-object>}
</tool_call>
{{- end }}<|im_end|>
{{ end }}
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 -}}
{{- if eq .Role "user" }}<|im_start|>user
{{ .Content }}<|im_end|>
{{ else if eq .Role "assistant" }}<|im_start|>assistant
{{ if .Content }}{{ .Content }}
{{- else if .ToolCalls }}<tool_call>
{{ range .ToolCalls }}{"name": "{{ .Function.Name }}", "arguments": {{ .Function.Arguments }}}
{{ end }}</tool_call>
{{- end }}{{ if not $last }}<|im_end|>
{{ end }}
{{- else if eq .Role "tool" }}<|im_start|>user
<tool_response>
{{ .Content }}
</tool_response><|im_end|>
{{ end }}
{{- if and (ne .Role "assistant") $last }}<|im_start|>assistant
{{ end }}
{{- end }}
{{- else }}
{{- if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
{{ end }}{{ .Response }}{{ if .Response }}<|im_end|>{{ end }}"""

SYSTEM "你是一个友好的人工智能助手,回答问题时要简洁明了。"

Converting to GGUF with llama.cpp
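
The convert_hf_to_gguf.py script ships in the root of the llama.cpp repo; a minimal setup sketch, assuming Python and pip are available:

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt

Then run the conversion against the fused model directory: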

python convert_hf_to_gguf.py /Users/may/mlx/mlx-examples/lora/qwen2.5-0.5B-test_1 --outtype bf16  --outfile ../qwen.gguf