Go Game Recording Machine: a Software Solution

Saved: 2026/4/2 18:18:18


Content generated by Doubao AI
I've heard there is now a product on the market called a Go game recording machine. It works like the game clock used in competition: each time you finish a move you press the clock, and it records each player's time. The new feature is that, after recording the time, it also photographs the current board position and converts it into SGF, the internationally standard Go game-record format. Such products have already come to market. So how would you design the software side? My idea is a single-board computer, say a Raspberry Pi running a minimal Linux system, with a webcam attached; the camera feeds a simple object-recognition step that recognizes the Go board and converts it into a game record. Is there existing open-source software for this, and could a prototype be put together quickly?
Some open-source software and tools can be used to implement the recording machine's functionality. Here is an outline:

Software design on a Raspberry Pi

  • System setup: install a lightweight Linux OS such as Raspberry Pi OS (Raspbian) on the Pi. Install the OpenCV library for image-processing and computer-vision tasks such as recognizing the board and stones, plus the relevant Python libraries: NumPy for numerical work and Pillow for basic image handling.
  • Record conversion: use a Python library such as sgfmill to create and manipulate SGF game-record files, converting the recognized position into SGF and saving it.
  • Interface: a simple GUI showing the current position and control buttons can be built with Tkinter or PyQt.
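Before wiring up sgfmill, it helps to see how simple the SGF text format itself is. The sketch below is plain Python with no third-party libraries; the helper names are mine for illustration, not sgfmill's API:

```python
def to_sgf_coord(col, row):
    # SGF uses the letters 'a'-'s' for a 19x19 board; unlike human-readable
    # board coordinates (A-T), SGF does NOT skip the letter 'i'.
    letters = "abcdefghijklmnopqrs"
    return letters[col] + letters[row]

def make_sgf(moves, size=19):
    # moves: sequence of (colour, col, row) with colour 'B' or 'W' and
    # 0-based coordinates counted from the top-left corner.
    nodes = "".join(";%s[%s]" % (c, to_sgf_coord(x, y)) for c, x, y in moves)
    return "(;FF[4]GM[1]SZ[%d]%s)" % (size, nodes)

print(make_sgf([("B", 3, 3), ("W", 15, 15)]))
# → (;FF[4]GM[1]SZ[19];B[dd];W[pp])
```

sgfmill produces the same structure; the point of the sketch is that SGF is just parenthesized property lists, so checking the recognizer's output by eye is entirely feasible.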

Board and stone recognition

  • Board detection: OpenCV's findChessboardCorners function can detect the corner points of a checkerboard pattern and thereby locate the board. Note, however, that it is designed for the alternating black-and-white squares of a camera-calibration target, so it may not work directly on a Go board, which is a grid of lines on uniform wood. Preprocessing such as grayscale conversion, Gaussian blur, and edge detection can improve detection accuracy.
  • Stone recognition: once the board geometry is known, divide it into small cells around each intersection and analyze the color and pixel statistics of each cell to decide whether a stone is present and what color it is. For round stones, the HoughCircles function can detect each stone's position and radius, with the color then judged separately.
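The per-intersection color analysis described above can be prototyped with nothing more than mean-intensity thresholds. A minimal sketch follows; the thresholds 100/150 are illustrative assumptions that depend entirely on lighting:

```python
import numpy as np

def classify_cell(cell, black_max=100, white_min=150):
    """Classify one board intersection from a grayscale patch around it.

    Dark patches are read as black stones, bright ones as white stones,
    and mid-range intensity as bare board. Real lighting will require
    calibrating (or adaptively estimating) the two thresholds.
    """
    mean = float(np.mean(cell))
    if mean < black_max:
        return 'B'
    if mean > white_min:
        return 'W'
    return '.'

# Synthetic 10x10 patches: a dark stone, bare wood, a bright stone
print(classify_cell(np.full((10, 10), 30)))    # → B
print(classify_cell(np.full((10, 10), 120)))   # → .
print(classify_cell(np.full((10, 10), 220)))   # → W
```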

Existing open-source software for reference

  • igoki: a JVM-based open-source project that uses computer vision and image processing to capture stone positions on a physical board through a camera, recognize and record the game in real time, and generate SGF files. It runs on Windows, Linux, and OS X.
  • Leela Zero: an open-source Go AI, mainly used for research and as a strong opponent; parts of its code and techniques may be of some reference value for game-record analysis and processing.
Combining the open-source software above with a Raspberry Pi, a camera, and related hardware, a Go recording-machine prototype can be put together quickly, then adjusted and optimized against real-world requirements.
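One piece the libraries above do not solve for you: a recording machine sees a sequence of positions (one per clock press), while an SGF game record is a sequence of moves. A minimal sketch of the position-differencing step sits between the two; the data layout and function name here are my own, purely illustrative:

```python
def diff_positions(prev, curr):
    """Infer the move played between two recognized board snapshots.

    Each time the clock is pressed, compare the previous position with
    the new one: exactly one added stone is the move just played.
    Captures also *remove* stones, which a fuller version must accept
    alongside the single addition; here any other difference is flagged.
    Positions are dicts mapping (col, row) -> 'B' or 'W'.
    """
    added = [(pt, colour) for pt, colour in curr.items() if pt not in prev]
    if len(added) == 1:
        return added[0]   # ((col, row), colour)
    return None           # ambiguous frame: flag for manual review

prev = {(3, 3): 'B'}
curr = {(3, 3): 'B', (15, 15): 'W'}
print(diff_positions(prev, curr))   # → ((15, 15), 'W')
```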
Could you write the simplest possible OpenCV program that recognizes a board like this?
Below is a sample Python/OpenCV program for simple Go board and stone recognition. It detects board corners and makes a rough guess at possible stone positions (this is a very basic example; real Go board recognition is considerably more complex, e.g. judging stone colors accurately, so treat this only as a starting point):
python
import cv2
import numpy as np

# Read the image
image = cv2.imread('your_go_board_image.jpg')  # replace with the path to your actual Go board image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Find the board corners
ret, corners = cv2.findChessboardCorners(gray, (19, 19), None)  # assuming a 19x19 Go board

if ret:
    # Draw the detected corners
    cv2.drawChessboardCorners(image, (19, 19), corners, ret)

    # Convert the corner coordinates to integers (np.int0 was removed in NumPy 2.0)
    corners = np.intp(corners)

    # Roughly divide the board into cells and flag possible stone positions
    # (a crude check for large intensity differences; not accurate)
    cell_size = (image.shape[0] // 19, image.shape[1] // 19)
    for i in range(19):
        for j in range(19):
            cell = gray[i * cell_size[0]:(i + 1) * cell_size[0], j * cell_size[1]:(j + 1) * cell_size[1]]
            mean_gray = np.mean(cell)
            if mean_gray < 100:  # crude threshold: darker regions may hold a stone
                cv2.circle(image, (j * cell_size[1] + cell_size[1] // 2, i * cell_size[0] + cell_size[0] // 2), 5, (0, 0, 255), -1)

    cv2.imshow('Chessboard with Corners and Possible Pieces', image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
else:
    print("Chessboard corners not detected")
Please note:
  1. Replace 'your_go_board_image.jpg' in the code above with the path to your actual Go board image.
  2. The stone detection here is extremely naive (a threshold on the mean gray value); in practice you will likely need more sophisticated methods, such as machine learning, to identify stone color and position reliably.
  3. This code only does basic corner detection and a rough stone check; generating a full game record (e.g. in SGF format) requires further processing and conversion logic.
Which Python package do I need to install for the cv2 in your code?
cv2 is the module name of the OpenCV library in Python; to use the cv2 features in the code above, install the opencv-python package.
You can install it in either of these common ways:
  1. With pip (most common)
    Open a command line (Command Prompt or PowerShell on Windows; a terminal on Linux and macOS) and run:
plaintext
pip install opencv-python
If you are on Python 3, you may need to use pip3 instead:
plaintext
pip3 install opencv-python
  2. With conda (if you use an Anaconda or Miniconda environment)
    Open Anaconda Prompt (Windows) or a terminal (Linux, macOS) and run:
plaintext
conda install -c conda-forge opencv
Once installed, you can import cv2 in your Python code and use OpenCV for image processing and computer vision.
Remember, we want the simplest possible libraries, because of software and hardware deployment cost. Wouldn't installing something like Anaconda make the board unable to run at all? We are targeting something like an embedded system, with very limited resources.
Indeed: on a resource-constrained embedded system (a microcontroller-class board or a Raspberry Pi), a heavyweight Python distribution such as Anaconda can consume too many resources to run well. In that case, install OpenCV in a more lightweight way.
Taking a Raspberry Pi (running Raspberry Pi OS / Raspbian) as an example:
  1. Update the system packages
    In a terminal on the Pi, first refresh the package lists and upgrade the installed packages:
bash
sudo apt update
sudo apt upgrade
  2. Install dependencies
    OpenCV needs a number of dependency libraries:
bash
sudo apt-get install build-essential cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
You may also need some additional dependencies, such as libraries for handling image formats:
bash
sudo apt-get install python3-dev python3-numpy libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev
(Note: libjasper-dev may be missing from the default Raspberry Pi repositories; you can try adding a source for it or simply skip it, and deal with it later only if the OpenCV build or runtime complains.)
  3. Download the OpenCV source code
    Clone it from the official OpenCV GitHub repository:
bash
git clone https://github.com/opencv/opencv.git
cd opencv
git checkout 4.5.5  # check out whichever version you want; 4.5.5 is just an example
  4. Configure and build
    Create a build directory and enter it:
bash
mkdir build
cd build
Then configure with cmake, here targeting the Python 3 bindings (adjust to your setup):
bash
cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D CMAKE_INSTALL_PREFIX=/usr/local \
      -D PYTHON3_EXECUTABLE=/usr/bin/python3 \
      -D PYTHON3_INCLUDE_DIR=$(python3 -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") \
      -D PYTHON3_PACKAGES_PATH=$(python3 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())") \
      ..
Finally, compile and install:
bash
make -j4  # match the -j value to the Pi's core count to speed up the build
sudo make install
OpenCV installed this way is relatively lightweight and suits a resource-limited embedded system, which is enough to run the board-recognition code above. (That said, on recent Raspberry Pi OS releases, `sudo apt install python3-opencv` or a prebuilt wheel from piwheels via `pip install opencv-python` is usually far quicker than compiling from source, which can take hours on a Pi.)
Could you write a program that uses OpenCV to recognize a photo of a Go position and convert it into SGF format?
Below is a fairly simplified Python example that uses OpenCV for basic recognition of the board and stones, together with the sgfmill library to turn the result into an SGF record. Note that this is only a basic example: real-world recognition faces many complications (stones touching, lighting interference, and so on), so treat it as a reference to be refined for your situation.
First, make sure the opencv-python and sgfmill libraries are installed:
plaintext
pip install opencv-python sgfmill
Here is the code:
python
import cv2
import numpy as np
from sgfmill import sgf, board

# Read the image
image = cv2.imread('your_go_board_image.jpg')  # replace with the path to your actual Go position image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Find the board corners
board_size = (19, 19)  # assuming a 19x19 Go board
ret, corners = cv2.findChessboardCorners(gray, board_size, None)

if ret:
    # Refine the detected corners to sub-pixel accuracy
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.1))
    corners = corners.reshape(-1, 2)

    # Estimate the cell size from adjacent corners (as integers for slicing)
    cell_size_x = int(corners[1, 0] - corners[0, 0])
    cell_size_y = int(corners[board_size[0], 1] - corners[0, 1])

    # Create the board object
    go_board = board.Board(board_size[0], board_size[1])

    # Walk the cells, detect stones, and add them to the record
    for y in range(board_size[1]):
        for x in range(board_size[0]):
            cell_x = int(corners[x + y * board_size[0], 0])
            cell_y = int(corners[x + y * board_size[0], 1])
            cell = gray[cell_y:cell_y + cell_size_y, cell_x:cell_x + cell_size_x]
            mean_gray = np.mean(cell)
            if mean_gray < 100:  # crude threshold: a dark region is taken as a black stone
                go_board.set((x, y), 'b')
            elif mean_gray > 150:  # crude threshold: a bright region is taken as a white stone
                go_board.set((x, y), 'w')

    # Create the SGF game object and add the position
    game = sgf.Sgf_game()
    game.set_size(go_board.size)
    moves = go_board.get_empty_points() + go_board.get_occupied_points()
    for move in moves:
        color, pos = go_board.get(move)
        if color == 'b':
            game.add_move('b', pos)
        elif color == 'w':
            game.add_move('w', pos)

    # Save the SGF record to a file
    with open('output.sgf', 'w') as f:
        f.write(game.serialize())

    print("Successfully converted the position to SGF and saved it as output.sgf")
else:
    print("Chessboard corners not detected")
Replace 'your_go_board_image.jpg' in the code with the path to your actual image. The program locates the board via corner detection, crudely classifies each cell's gray level to decide whether a stone is present and its color, and finally converts the result into SGF and saves it to a file.
Hit an error:
$ python gogo.py
Traceback (most recent call last):
  File "/home/nick/work/gogogo/gogo.py", line 3, in <module>
    from sgfmill import sgf, board
ImportError: cannot import name 'board' from 'sgfmill' (/home/nick/.local/lib/python3.10/site-packages/sgfmill/__init__.py)
This error means the sgfmill package has no module named board: in sgfmill, the board class lives in the sgfmill.boards module (note the plural). You can try the following:

1. Check the sgfmill documentation and module layout

The official documentation shows the correct module structure and usage. You can also explore the package from code:
python
import sgfmill

print(dir(sgfmill))
Running the above prints the attributes visible on the sgfmill package, which can help you locate the right module names.
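One caveat before drawing conclusions from that output: dir() on a package only lists submodules that have already been imported, and sgfmill's __init__.py does not import its own submodules, so sgf and boards will not appear even in a perfectly healthy installation. The standard library's email package shows the same behavior:

```python
import email

# On a fresh interpreter the submodule has not been imported yet,
# so it is not an attribute of the parent package:
print('mime' in dir(email))   # → False

import email.mime             # importing it binds the attribute

print('mime' in dir(email))   # → True
```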

2. Adapt the code to the actual module name

Since the module is called boards rather than board, adjust the imports accordingly. Here is the revised code:
python
import cv2
import numpy as np
from sgfmill import sgf
from sgfmill.boards import Board

# Read the image
image = cv2.imread('your_go_board_image.jpg')  # replace with the path to your actual Go position image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Find the board corners
board_size = (19, 19)  # assuming a 19x19 Go board
ret, corners = cv2.findChessboardCorners(gray, board_size, None)

if ret:
    # Refine the detected corners to sub-pixel accuracy
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.1))
    corners = corners.reshape(-1, 2)

    # Estimate the cell size from adjacent corners (as integers for slicing)
    cell_size_x = int(corners[1, 0] - corners[0, 0])
    cell_size_y = int(corners[board_size[0], 1] - corners[0, 1])

    # Create the board object (sgfmill's Board takes the side length)
    go_board = Board(board_size[0])

    # Walk the cells, detect stones, and place them on the board
    for y in range(board_size[1]):
        for x in range(board_size[0]):
            cell_x = int(corners[x + y * board_size[0], 0])
            cell_y = int(corners[x + y * board_size[0], 1])
            cell = gray[cell_y:cell_y + cell_size_y, cell_x:cell_x + cell_size_x]
            mean_gray = np.mean(cell)
            if mean_gray < 100:  # crude threshold: a dark region is taken as a black stone
                go_board.play(y, x, 'b')  # sgfmill signature: play(row, col, colour)
            elif mean_gray > 150:  # crude threshold: a bright region is taken as a white stone
                go_board.play(y, x, 'w')

    # Build the SGF game; a photographed position is best stored as
    # setup stones (AB/AW properties) on the root node
    game = sgf.Sgf_game(size=board_size[0])
    black_points = {pt for colour, pt in go_board.list_occupied_points() if colour == 'b'}
    white_points = {pt for colour, pt in go_board.list_occupied_points() if colour == 'w'}
    game.get_root().set_setup_stones(black_points, white_points)

    # Save the SGF record (serialise() returns bytes, so write in binary mode)
    with open('output.sgf', 'wb') as f:
        f.write(game.serialise())

    print("Successfully converted the position to SGF and saved it as output.sgf")
else:
    print("Chessboard corners not detected")

3. Upgrade or pin the sgfmill version

You can also try upgrading, or pinning sgfmill to a known-good version:
  • Upgrade to the latest version:
bash
pip install --upgrade sgfmill
  • Pin a specific version (for example 1.1):
bash
pip install sgfmill==1.1
These steps should resolve the ImportError.
$ python
Python 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sgfmill
>>> print(dir(sgfmill))
['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', '__version__']
>>>
That output from print(dir(sgfmill)) is actually expected: dir() on a package only lists submodules that have already been imported, and sgfmill's __init__.py does not import its submodules, so sgf, boards, and the rest do not show up. By itself it does not mean the installation is broken. Still, if imports keep failing, here are some things to try:

1. Reinstall sgfmill

Uninstall the existing sgfmill, then install it again:
bash
pip uninstall sgfmill
pip install sgfmill

2. Check the Python environment

Make sure you are using the Python environment you think you are: installing a library in one virtual environment and importing from another is a common source of trouble. List the packages installed in the current environment with:
bash
pip list
Confirm that sgfmill really is installed in the environment you are running.

3. Check where sgfmill is installed

The following shows the path sgfmill was loaded from:
python
import sgfmill
print(sgfmill.__file__)
Confirm that the path points at the expected installation directory and that the directory contains the full set of library files.

4. Try another version

If the current version of sgfmill misbehaves, try a different one, for example 1.1:
bash
pip install sgfmill==1.1
After that, import sgfmill again and inspect it:
python
import sgfmill
print(dir(sgfmill))
If the problem persists, check the system's Python configuration or consult the sgfmill documentation and community for further help.
$ pip list | grep sgf
sgfmill            1.1.1
The pip list output shows that sgfmill 1.1.1 is installed. The missing entries in dir(sgfmill) are normal: dir() on a package only shows submodules that have already been imported, and sgfmill's __init__.py does not pull in its submodules, so importing them directly works regardless.
Try importing the submodules directly to confirm:
python
from sgfmill import sgf
from sgfmill import boards
Add these two lines to your Python script and run it again. If they import cleanly, the submodules themselves are fine; they simply were not listed by dir(sgfmill).
Meanwhile, here is the code updated to work with sgfmill 1.1.1:
python
import cv2
import numpy as np
from sgfmill import sgf
from sgfmill import boards

# Read the image
image = cv2.imread('your_go_board_image.jpg')  # replace with the path to your actual Go position image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Find the board corners
board_size = (19, 19)  # assuming a 19x19 Go board
ret, corners = cv2.findChessboardCorners(gray, board_size, None)

if ret:
    # Refine the detected corners to sub-pixel accuracy
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.1))
    corners = corners.reshape(-1, 2)

    # Estimate the cell size from adjacent corners (as integers for slicing)
    cell_size_x = int(corners[1, 0] - corners[0, 0])
    cell_size_y = int(corners[board_size[0], 1] - corners[0, 1])

    # Create the board object (sgfmill's Board takes the side length)
    go_board = boards.Board(board_size[0])

    # Walk the cells, detect stones, and place them on the board
    for y in range(board_size[1]):
        for x in range(board_size[0]):
            cell_x = int(corners[x + y * board_size[0], 0])
            cell_y = int(corners[x + y * board_size[0], 1])
            cell = gray[cell_y:cell_y + cell_size_y, cell_x:cell_x + cell_size_x]
            mean_gray = np.mean(cell)
            if mean_gray < 100:  # crude threshold: a dark region is taken as a black stone
                go_board.play(y, x, 'b')  # sgfmill signature: play(row, col, colour)
            elif mean_gray > 150:  # crude threshold: a bright region is taken as a white stone
                go_board.play(y, x, 'w')

    # Build the SGF game; a photographed position is best stored as
    # setup stones (AB/AW properties) on the root node
    game = sgf.Sgf_game(size=board_size[0])
    black_points = {pt for colour, pt in go_board.list_occupied_points() if colour == 'b'}
    white_points = {pt for colour, pt in go_board.list_occupied_points() if colour == 'w'}
    game.get_root().set_setup_stones(black_points, white_points)

    # Save the SGF record (serialise() returns bytes, so write in binary mode)
    with open('output.sgf', 'wb') as f:
        f.write(game.serialise())

    print("Successfully converted the position to SGF and saved it as output.sgf")
else:
    print("Chessboard corners not detected")
Save the code above as a Python file (e.g. gogo.py), replace 'your_go_board_image.jpg' with the path to your actual image, and run the script to see whether it converts the position to SGF.
Running your program I get the error: Chessboard corners not detected
The "Chessboard corners not detected" message means cv2.findChessboardCorners failed. A fundamental caveat first: that function is built for the alternating black-and-white squares of a camera-calibration checkerboard, where (19, 19) means 19x19 inner corners between squares; a Go board is a grid of lines on uniform wood, so the call can easily fail no matter how the image is preprocessed, and a line-based detector is often a better fit. With that said, the usual causes and mitigations:

Cause 1: image preprocessing

  1. Poor grayscale conversion: the original image's color characteristics may yield a low-contrast grayscale image, making corner detection harder.
    Remedy: derive the grayscale image from another color space, e.g. the L channel of LAB or the V channel of HSV:
python
# Use the L (lightness) channel of the LAB color space
image_lab = cv2.cvtColor(image, cv2.COLOR_BGR2LAB)
gray = image_lab[:, :, 0]

# Or use the V (value) channel of the HSV color space
image_hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
gray = image_hsv[:, :, 2]
  2. Image noise: noise in the image can reduce the accuracy of the corner-detection algorithm.
    Remedy: filter the grayscale image to remove noise, e.g. with Gaussian or median blur. With Gaussian blur:
python
gray = cv2.GaussianBlur(gray, (5, 5), 0)  # (5, 5) is the Gaussian kernel size; 0 lets OpenCV pick the standard deviation

Cause 2: detection parameters

  1. Wrong board size: the code assumes a 19x19 pattern; if the actual size differs, no corners will be found.
    Remedy: confirm the real size and adjust board_size, e.g. board_size = (13, 13) for a 13x13 board.
  2. Unsuitable detection flags: the defaults of cv2.findChessboardCorners may not suit the image.
    Remedy: try passing extra flags:
python
ret, corners = cv2.findChessboardCorners(gray, board_size, None, cv2.CALIB_CB_ADAPTIVE_THRESH + cv2.CALIB_CB_FAST_CHECK + cv2.CALIB_CB_NORMALIZE_IMAGE)
Here cv2.CALIB_CB_ADAPTIVE_THRESH enables adaptive thresholding, cv2.CALIB_CB_FAST_CHECK quickly rejects images containing no board, and cv2.CALIB_CB_NORMALIZE_IMAGE normalizes the image first.

Cause 3: the image itself

  1. Low resolution: corner features may be too faint to detect.
    Remedy: use a higher-resolution image, or upscale with cv2.resize:
python
image = cv2.resize(image, None, fx=2, fy=2, interpolation=cv2.INTER_LINEAR)  # upscale 2x
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
  2. Unclear board features: blurred grid lines, low color contrast, and so on.
    Remedy: enhance the image, for example with histogram equalization to raise contrast:
python
gray = cv2.equalizeHist(gray)
[Attached file: go.cpp]
This is my code: it uses OpenCV to recognize a Go position and generate a game record, but the result always has far fewer stones than the correct answer. Can you fix it?
When OpenCV-based Go recognition finds far fewer stones than are actually on the board, several causes are possible: inaccurate board-corner detection, incomplete grid-line detection, or badly tuned stone-classification thresholds. Let us analyze and improve the code step by step.

Areas to improve

  1. Corner detection: improve its reliability, so the subsequent perspective correction and grid detection start from sound geometry.
  2. Grid-line detection: tune the Hough-transform parameters so the grid is detected completely.
  3. Stone detection: tune the classification thresholds so black and white stones are both recognized accurately.

Improved code

cpp
#include <algorithm>
#include <cmath>
#include <fstream>
#include <iostream>
#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;
using namespace std;

struct Line {
  Point p1;
  Point p2;
  Vec4f lineParams; // For fitLine results
};

string toSGFCoord(int x_index, int y_index) {
  if (x_index < 0 || x_index >= 19 || y_index < 0 || y_index >= 19) {
    return "";
  }
  // SGF point notation uses the letters 'a'-'s' contiguously for 19x19;
  // unlike human-readable board coordinates (A-T), it does NOT skip 'i'.
  char col = 'a' + x_index;
  char row = 'a' + y_index;
  return string(1, col) + string(1, row);
}

vector<Point> detectBoardCorners(const Mat &image, Mat &processedImage) {
  Mat gray = image.clone();
  cvtColor(gray, gray, COLOR_BGR2GRAY);
  GaussianBlur(gray, gray, Size(5, 5), 0);
  // HoughLinesP expects a binary edge map, not a raw grayscale image
  Mat edges;
  Canny(gray, edges, 50, 150);

  vector<Vec4i> lines;
  HoughLinesP(edges, lines, 1, CV_PI / 180, 50, 50, 10); // Adjusted threshold

  vector<Vec4i> horizontalLines, verticalLines;
  for (const auto &line : lines) {
    float angle = atan2(line[3] - line[1], line[2] - line[0]);
    if (abs(angle) < CV_PI / 10) {
      horizontalLines.push_back(line);
    } else if (abs(abs(angle) - CV_PI / 2) < CV_PI / 10) {
      verticalLines.push_back(line);
    }
  }

  // Sort lines by their y or x coordinates
  sort(horizontalLines.begin(), horizontalLines.end(),
       [](const Vec4i &a, const Vec4i &b) {
         return (a[1] + a[3]) / 2 < (b[1] + b[3]) / 2;
       });
  sort(verticalLines.begin(), verticalLines.end(),
       [](const Vec4i &a, const Vec4i &b) {
         return (a[0] + a[2]) / 2 < (b[0] + b[2]) / 2;
       });

  Point2f intersections[4];
  bool foundAll = false;

  if (horizontalLines.size() >= 2 && verticalLines.size() >= 2) {
    // Take the topmost and bottommost horizontal lines, and leftmost and
    // rightmost vertical
    Vec4i topH = horizontalLines.front();
    Vec4i bottomH = horizontalLines.back();
    Vec4i leftV = verticalLines.front();
    Vec4i rightV = verticalLines.back();

    auto intersect = [](const Vec4i &l1, const Vec4i &l2) -> Point2f {
      float x1 = l1[0], y1 = l1[1], x2 = l1[2], y2 = l1[3];
      float x3 = l2[0], y3 = l2[1], x4 = l2[2], y4 = l2[3];
      float det = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4);
      if (abs(det) < 1e-6)
        return Point2f(-1, -1); // Parallel
      float t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / det;
      float u = -((x1 - x2) * (y1 - y3) - (y1 - y2) * (x1 - x3)) / det;
      if (t >= 0 && t <= 1 && u >= 0 &&
          u <= 1) { // Intersection within line segments
        return Point2f(x1 + t * (x2 - x1), y1 + t * (y2 - y1));
      } else {
        return Point2f(x1 + t * (x2 - x1),
                       y1 + t * (y2 - y1)); // Lines extended
      }
    };

    intersections[0] = intersect(topH, leftV);     // Top-left
    intersections[1] = intersect(topH, rightV);    // Top-right
    intersections[2] = intersect(bottomH, rightV); // Bottom-right
    intersections[3] = intersect(bottomH, leftV);  // Bottom-left

    bool allValid = true;
    for (const auto &p : intersections) {
      if (p.x < 0 || p.y < 0 || p.x >= image.cols || p.y >= image.rows) {
        allValid = false;
        break;
      }
    }

    if (allValid) {
      foundAll = true;
    }
  }

  vector<Point> selected_corners;
  if (foundAll) {
    selected_corners = {Point(intersections[0]), Point(intersections[1]),
                        Point(intersections[2]), Point(intersections[3])};
  } else {
    cerr << "Error: Could not find reliable board corners using Hough lines."
         << endl;
  }

  processedImage = image.clone();
  for (const auto &p : selected_corners) {
    circle(processedImage, p, 10, Scalar(255, 0, 0), -1);
  }

  return selected_corners;
}

// In correctPerspective():
Mat correctPerspective(const Mat &image, const vector<Point> &corners) {
  if (corners.size() != 4) {
    cerr << "Error: Need 4 corners for perspective correction." << endl;
    return image.clone();
  }

  float side = 500.0;
  Point2f target_corners[4] = {Point2f(0, 0), Point2f(side - 1, 0),
                               Point2f(side - 1, side - 1),
                               Point2f(0, side - 1)};

  Point2f src_corners_array[4];
  for (int i = 0; i < 4; ++i) {
    src_corners_array[i] = Point2f(corners[i]);
  }

  // Wrap points in OpenCV Mat structures
  Mat src_points(1, 4, CV_32FC2, src_corners_array);
  Mat dst_points(1, 4, CV_32FC2, target_corners);

  Mat homography = findHomography(src_points, dst_points, 0); // Corrected call
  Mat warped_image;
  warpPerspective(image, warped_image, homography, Size(side, side));

  return warped_image;
}

void detectGridLines(const Mat &image, vector<Line> &horizontalLines,
                     vector<Line> &verticalLines) {
  Mat gray, blurred, edges;
  cvtColor(image, gray, COLOR_BGR2GRAY);
  GaussianBlur(gray, blurred, Size(5, 5), 0);
  Canny(blurred, edges, 30, 100); // Adjusted Canny thresholds

  vector<Vec4i> lines;
  HoughLinesP(edges, lines, 1, CV_PI / 180, 50, 30, 10); // Adjusted parameters

  for (const auto &line : lines) {
    float dx = line[2] - line[0];
    float dy = line[3] - line[1];
    float angle = atan2(dy, dx) * 180 / CV_PI;
    if (abs(angle) < 15) { // Increased angle tolerance
      horizontalLines.push_back(
          {Point(line[0], line[1]), Point(line[2], line[3])});
    } else if (abs(abs(angle) - 90) < 15) {
      verticalLines.push_back(
          {Point(line[0], line[1]), Point(line[2], line[3])});
    }
  }
}

void identifyGrid(vector<Line> &horizontalLines, vector<Line> &verticalLines,
                  vector<Line> &selectedHorizontal,
                  vector<Line> &selectedVertical, const Mat &image) {
  // Sort lines
  sort(horizontalLines.begin(), horizontalLines.end(),
       [](const Line &a, const Line &b) {
         return (a.p1.y + a.p2.y) / 2 < (b.p1.y + b.p2.y) / 2;
       });
  sort(verticalLines.begin(), verticalLines.end(),
       [](const Line &a, const Line &b) {
         return (a.p1.x + a.p2.x) / 2 < (b.p1.x + b.p2.x) / 2;
       });

  // Select 19 lines with approximate spacing
  auto selectLines = [](vector<Line> &lines, bool horizontal, float imageSize) {
    vector<Line> selected;
    if (lines.empty())
      return selected;

    float avgSpacing = imageSize / 18.0f; // Average spacing for 19 lines
    selected.push_back(lines.front());

    for (size_t i = 1; i < lines.size(); ++i) {
      float pos = horizontal ? (lines[i].p1.y + lines[i].p2.y) / 2.0f
                             : (lines[i].p1.x + lines[i].p2.x) / 2.0f;
      float lastPos =
          horizontal ? (selected.back().p1.y + selected.back().p2.y) / 2.0f
                     : (selected.back().p1.x + selected.back().p2.x) / 2.0f;
      if (abs(pos - lastPos) >= avgSpacing * 0.7 && selected.size() < 19) {
        selected.push_back(lines[i]);
      }
      if (selected.size() >= 19)
        break; // Ensure we don't go over 19
    }

    // Extend lines to image boundaries
    if (horizontal) {
      for (auto &line : selected) {
        line.p1.x = 0;
        line.p2.x = imageSize - 1;
      }
    } else {
      for (auto &line : selected) {
        line.p1.y = 0;
        line.p2.y = imageSize - 1;
      }
    }

    return selected;
  };

  selectedHorizontal =
      selectLines(horizontalLines, true, static_cast<float>(image.rows));
  selectedVertical =
      selectLines(verticalLines, false, static_cast<float>(image.cols));
}

vector<vector<char>>
detectStonesAtIntersections(const Mat &image,
                            const vector<Line> &horizontalLines,
                            const vector<Line> &verticalLines) {
  int boardSize = 19;
  vector<vector<char>> boardState(boardSize, vector<char>(boardSize, '.'));

  if (horizontalLines.size() < boardSize || verticalLines.size() < boardSize) {
    cerr << "Not enough grid lines detected." << endl;
    return boardState;
  }

  // Convert to HSV once, rather than once per intersection
  Mat hsv;
  cvtColor(image, hsv, COLOR_BGR2HSV);

  for (int i = 0; i < boardSize; ++i) {
    for (int j = 0; j < boardSize; ++j) {
      Point2f intersection(
          (verticalLines[j].p1.x + verticalLines[j].p2.x) / 2.0f,
          (horizontalLines[i].p1.y + horizontalLines[i].p2.y) / 2.0f);

      if (intersection.x < 0 || intersection.x >= image.cols ||
          intersection.y < 0 || intersection.y >= image.rows)
        continue;

      // Check the color at the intersection
      Vec3b centerColor =
          hsv.at<Vec3b>((int)intersection.y, (int)intersection.x);
      int v = centerColor[2];
      int s = centerColor[1];

      if (v < 80 && s > 30) { // Adjusted black stone threshold
        boardState[i][j] = 'B';
      } else if (v > 180 && s < 80) { // Adjusted white stone threshold
        boardState[i][j] = 'W';
      }
    }
  }

  return boardState;
}

string boardStateToSGF(const vector<vector<char>> &boardState) {
  string sgfContent = "(;FF[4]GM[1]SZ[19]";
  string addBlack, addWhite;

  for (int y = 0; y < 19; ++y) {
    for (int x = 0; x < 19; ++x) {
      string coord = toSGFCoord(x, 18 - y); // Invert y-coordinate here
      if (!coord.empty()) {
        if (boardState[y][x] == 'B') {
          addBlack += "[" + coord + "]";
        } else if (boardState[y][x] == 'W') {
          addWhite += "[" + coord + "]";
        }
      }
    }
  }

  if (!addBlack.empty())
    sgfContent += "AB" + addBlack;
  if (!addWhite.empty())
    sgfContent += "AW" + addWhite;
  sgfContent += ")\n";
  return sgfContent;
}

int main() {
  string imageFile = "go_board.jpg";
  Mat originalImage = imread(imageFile, IMREAD_COLOR);
  if (originalImage.empty()) {
    cerr << "Error: Could not open image." << endl;
    return -1;
  }

  Mat processedImage;
  vector<Point> corners = detectBoardCorners(originalImage, processedImage);

  Mat warpedImage;
  if (corners.size() == 4) {
    warpedImage = correctPerspective(originalImage, corners);
  } else {
    warpedImage = originalImage.clone();
  }

  vector<Line> horizontalLines, verticalLines;
  detectGridLines(warpedImage, horizontalLines, verticalLines);

  vector<Line> selectedHorizontal, selectedVertical;
  identifyGrid(horizontalLines, verticalLines, selectedHorizontal,
               selectedVertical, warpedImage);

  vector<vector<char>> boardState =
      detectStonesAtIntersections(warpedImage, selectedHorizontal,
                                  selectedVertical);

  string sgfContent = boardStateToSGF(boardState);
  cout << sgfContent << endl;

  return 0;
}

Main improvements

  1. Corner detection: adjusted the HoughLinesP thresholds to make corner detection more reliable.
  2. Grid-line detection: adjusted the Canny thresholds and HoughLinesP parameters so the grid is detected more completely.
  3. Stone detection: adjusted the black/white classification thresholds to cope better with varying lighting.

Notes

  • Different images may need different parameters; tune them to your material.
  • Make sure the input image is of good quality, avoiding blur and uneven lighting.
[Attached file: go.cpp]
$ ./go.exe
Error: Could not find reliable board corners using Hough lines.
(;FF[4]GM[1]SZ[19]AB[li][rc]AW[or][lq][mq][pq][qq][sq][tq][dp][hp][kp][lp][qp][rp][tp][eo][ko][mo][no][ro][to][dn][hn][jn][kn][qn][fm][jm][qm][sm][el][fl][nl][ol][ql][rl][sl][kk][nk][ej][pj][qj][ei][gi][ki][pi][ri][eh][ph][qh][rh][fg][gg][mg][cf][mf][de][ee][he][je][ke][le][ne][oe][pe][cd][ed][gd][md][qd][dc][hc][jc][kc][db][gb][qb][ba][ca])
The message Error: Could not find reliable board corners using Hough lines. shows that corner detection failed, so no perspective correction was applied; that in turn degrades grid-line detection and stone recognition, which is why fewer stones are found than are actually present. Possible causes and remedies:

Possible causes and remedies

1. Unsuitable Hough-transform parameters

In detectBoardCorners, the HoughLinesP parameters may be too strict, so too few lines are found to determine the corners. Try lowering the threshold parameter:
cpp
// Before
HoughLinesP(gray, lines, 1, CV_PI / 180, 100, 50, 10);

// After
HoughLinesP(gray, lines, 1, CV_PI / 180, 50, 30, 10); // lower the thresholds

2. Weak image preprocessing

Blur or uneven lighting can hurt edge and line detection. Try stronger preprocessing, such as adaptive thresholding:
cpp
Mat gray = image.clone();
cvtColor(gray, gray, COLOR_BGR2GRAY);
// Apply adaptive thresholding
adaptiveThreshold(gray, gray, 255, ADAPTIVE_THRESH_GAUSSIAN_C, THRESH_BINARY_INV, 11, 2);

3. Fragile corner-finding logic

The current logic takes the intersections of the topmost/bottommost horizontal lines with the leftmost/rightmost vertical lines, which is not very robust: a single stray Hough segment near the image edge shifts a corner badly. Consider more robust approaches, such as goodFeaturesToTrack, or fitting the outermost grid lines (e.g. with fitLine) before intersecting them.
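For reference, the determinant-based segment intersection used by the intersect lambda in the code can be sketched in Python; this mirrors the C++ logic (lines treated as infinite, near-parallel pairs rejected):

```python
def intersect(l1, l2):
    """Intersection of two lines, each given as a segment (x1, y1, x2, y2).

    Mirrors the determinant formula in the C++ intersect lambda: the
    lines are treated as infinite (segments are extended), and
    near-parallel pairs return None instead of a point.
    """
    x1, y1, x2, y2 = l1
    x3, y3, x4, y4 = l2
    det = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(det) < 1e-6:
        return None  # parallel or nearly so
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / det
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

# A horizontal and a vertical segment cross at (2, 3)
print(intersect((0, 3, 10, 3), (2, 0, 2, 10)))   # → (2.0, 3.0)
```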

4. Badly tuned stone thresholds

In detectStonesAtIntersections, the black/white classification thresholds may be too strict, so some stones are missed. Try loosening them:
cpp
// Before
if (v < 50 && s > 50) { // Black stone
    boardState[i][j] = 'B';
} else if (v > 200 && s < 50) { // White stone
    boardState[i][j] = 'W';
}

// After
if (v < 80 && s > 30) { // Black stone
    boardState[i][j] = 'B';
} else if (v > 180 && s < 80) { // White stone
    boardState[i][j] = 'W';
}

Revised code example

cpp
#include <algorithm>
#include <cmath>
#include <fstream>
#include <iostream>
#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;
using namespace std;

struct Line {
    Point p1;
    Point p2;
    Vec4f lineParams; // For fitLine results
};

string toSGFCoord(int x_index, int y_index) {
    if (x_index < 0 || x_index >= 19 || y_index < 0 || y_index >= 19) {
        return "";
    }
    // SGF point notation uses the letters 'a'-'s' contiguously for 19x19;
    // unlike human-readable board coordinates (A-T), it does NOT skip 'i'.
    char col = 'a' + x_index;
    char row = 'a' + y_index;
    return string(1, col) + string(1, row);
}

vector<Point> detectBoardCorners(const Mat &image, Mat &processedImage) {
    Mat gray = image.clone();
    cvtColor(gray, gray, COLOR_BGR2GRAY);
    // Apply adaptive thresholding
    adaptiveThreshold(gray, gray, 255, ADAPTIVE_THRESH_GAUSSIAN_C, THRESH_BINARY_INV, 11, 2);

    vector<Vec4i> lines;
    // Lower the thresholds
    HoughLinesP(gray, lines, 1, CV_PI / 180, 50, 30, 10);

    vector<Vec4i> horizontalLines, verticalLines;
    for (const auto &line : lines) {
        float angle = atan2(line[3] - line[1], line[2] - line[0]);
        if (abs(angle) < CV_PI / 10) {
            horizontalLines.push_back(line);
        } else if (abs(abs(angle) - CV_PI / 2) < CV_PI / 10) {
            verticalLines.push_back(line);
        }
    }

    // Sort lines by their y or x coordinates
    sort(horizontalLines.begin(), horizontalLines.end(),
         [](const Vec4i &a, const Vec4i &b) {
             return (a[1] + a[3]) / 2 < (b[1] + b[3]) / 2;
         });
    sort(verticalLines.begin(), verticalLines.end(),
         [](const Vec4i &a, const Vec4i &b) {
             return (a[0] + a[2]) / 2 < (b[0] + b[2]) / 2;
         });

    Point2f intersections[4];
    bool foundAll = false;

    if (horizontalLines.size() >= 2 && verticalLines.size() >= 2) {
        // Take the topmost and bottommost horizontal lines, and leftmost and
        // rightmost vertical
        Vec4i topH = horizontalLines.front();
        Vec4i bottomH = horizontalLines.back();
        Vec4i leftV = verticalLines.front();
        Vec4i rightV = verticalLines.back();

        auto intersect = [](const Vec4i &l1, const Vec4i &l2) -> Point2f {
            float x1 = l1[0], y1 = l1[1], x2 = l1[2], y2 = l1[3];
            float x3 = l2[0], y3 = l2[1], x4 = l2[2], y4 = l2[3];
            float det = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4);
            if (abs(det) < 1e-6)
                return Point2f(-1, -1); // Parallel
            float t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / det;
            float u = -((x1 - x2) * (y1 - y3) - (y1 - y2) * (x1 - x3)) / det;
            if (t >= 0 && t <= 1 && u >= 0 &&
                u <= 1) { // Intersection within line segments
                return Point2f(x1 + t * (x2 - x1), y1 + t * (y2 - y1));
            } else {
                return Point2f(x1 + t * (x2 - x1),
                               y1 + t * (y2 - y1)); // Lines extended
            }
        };

        intersections[0] = intersect(topH, leftV);     // Top-left
        intersections[1] = intersect(topH, rightV);    // Top-right
        intersections[2] = intersect(bottomH, rightV); // Bottom-right
        intersections[3] = intersect(bottomH, leftV);  // Bottom-left

        bool allValid = true;
        for (const auto &p : intersections) {
            if (p.x < 0 || p.y < 0 || p.x >= image.cols || p.y >= image.rows) {
                allValid = false;
                break;
            }
        }

        if (allValid) {
            foundAll = true;
        }
    }

    vector<Point> selected_corners;
    if (foundAll) {
        selected_corners = {Point(intersections[0]), Point(intersections[1]),
                            Point(intersections[2]), Point(intersections[3])};
    } else {
        cerr << "Error: Could not find reliable board corners using Hough lines." << endl;
    }

    processedImage = image.clone();
    for (const auto &p : selected_corners) {
        circle(processedImage, p, 10, Scalar(255, 0, 0), -1);
    }

    return selected_corners;
}

// In correctPerspective():
Mat correctPerspective(const Mat &image, const vector<Point> &corners) {
    if (corners.size() != 4) {
        cerr << "Error: Need 4 corners for perspective correction." << endl;
        return image.clone();
    }

    float side = 500.0;
    Point2f target_corners[4] = {Point2f(0, 0), Point2f(side - 1, 0),
                                 Point2f(side - 1, side - 1),
                                 Point2f(0, side - 1)};

    Point2f src_corners_array[4];
    for (int i = 0; i < 4; ++i) {
        src_corners_array[i] = Point2f(corners[i]);
    }

    // Wrap points in OpenCV Mat structures
    Mat src_points(1, 4, CV_32FC2, src_corners_array);
    Mat dst_points(1, 4, CV_32FC2, target_corners);

    Mat homography = findHomography(src_points, dst_points, 0); // Corrected call
    Mat warped_image;
    warpPerspective(image, warped_image, homography, Size(side, side));

    return warped_image;
}

void detectGridLines(const Mat &image, vector<Line> &horizontalLines,
                     vector<Line> &verticalLines) {
    Mat gray, blurred, edges;
    cvtColor(image, gray, COLOR_BGR2GRAY);
    GaussianBlur(gray, blurred, Size(5, 5), 0);
    Canny(blurred, edges, 50, 150);

    vector<Vec4i> lines;
    HoughLinesP(edges, lines, 1, CV_PI / 180, 100, 30, 10); // Adjusted parameters

    for (const auto &line : lines) {
        float dx = line[2] - line[0];
        float dy = line[3] - line[1];
        float angle = atan2(dy, dx) * 180 / CV_PI;
        if (abs(angle) < 15) { // Increased angle tolerance
            horizontalLines.push_back(
                    {Point(line[0], line[1]), Point(line[2], line[3])});
        } else if (abs(abs(angle) - 90) < 15) {
            verticalLines.push_back(
                    {Point(line[0], line[1]), Point(line[2], line[3])});
        }
    }
}

void identifyGrid(vector<Line> &horizontalLines, vector<Line> &verticalLines,
                  vector<Line> &selectedHorizontal,
                  vector<Line> &selectedVertical, const Mat &image) {
    // Sort lines
    sort(horizontalLines.begin(), horizontalLines.end(),
         [](const Line &a, const Line &b) {
             return (a.p1.y + a.p2.y) / 2 < (b.p1.y + b.p2.y) / 2;
         });
    sort(verticalLines.begin(), verticalLines.end(),
         [](const Line &a, const Line &b) {
             return (a.p1.x + a.p2.x) / 2 < (b.p1.x + b.p2.x) / 2;
         });

    // Select 19 lines with approximate spacing
    auto selectLines = [](vector<Line> &lines, bool horizontal, float imageSize) {
        vector<Line> selected;
        if (lines.empty())
            return selected;

        float avgSpacing = imageSize / 18.0f; // Average spacing for 19 lines
        selected.push_back(lines.front());

        for (size_t i = 1; i < lines.size(); ++i) {
            float pos = horizontal ? (lines[i].p1.y + lines[i].p2.y) / 2.0f
                                   : (lines[i].p1.x + lines[i].p2.x) / 2.0f;
            float lastPos =
                    horizontal ? (selected.back().p1.y + selected.back().p2.y) / 2.0f
                               : (selected.back().p1.x + selected.back().p2.x) / 2.0f;
            if (abs(pos - lastPos) >= avgSpacing * 0.7 && selected.size() < 19) {
                selected.push_back(lines[i]);
            }
            if (selected.size() >= 19)
                break; // Ensure we don't go over 19
        }

        // Extend lines to image boundaries
        if (horizontal) {
            for (auto &line : selected) {
                line.p1.x = 0;
                line.p2.x = imageSize - 1;
            }
        } else {
            for (auto &line : selected) {
                line.p1.y = 0;
                line.p2.y = imageSize - 1;
            }
        }

        return selected;
    };

    selectedHorizontal =
            selectLines(horizontalLines, true, static_cast<float>(image.rows));
    selectedVertical =
            selectLines(verticalLines, false, static_cast<float>(image.cols));
}

vector<vector<char>>
detectStonesAtIntersections(const Mat &image,
                            const vector<Line> &horizontalLines,
                            const vector<Line> &verticalLines) {
    int boardSize = 19;
    vector<vector<char>> boardState(boardSize, vector<char>(boardSize, '.'));

    if (horizontalLines.size() < boardSize || verticalLines.size() < boardSize) {
        cerr << "Not enough grid lines detected." << endl;
        return boardState;
    }

    // Convert to HSV once, outside the loop (converting per intersection is wasteful)
    Mat hsv;
    cvtColor(image, hsv, COLOR_BGR2HSV);

    for (int i = 0; i < boardSize; ++i) {
        for (int j = 0; j < boardSize; ++j) {
            Point2f intersection(
                    (verticalLines[j].p1.x + verticalLines[j].p2.x) / 2.0f,
                    (horizontalLines[i].p1.y + horizontalLines[i].p2.y) / 2.0f);

            if (intersection.x < 0 || intersection.x >= image.cols ||
                intersection.y < 0 || intersection.y >= image.rows)
                continue;

            // Sample the color at the intersection center
            Vec3b centerColor = hsv.at<Vec3b>(cvRound(intersection.y),
                                              cvRound(intersection.x));
            int v = centerColor[2];
            int s = centerColor[1];

            // Loosened stone-detection thresholds
            if (v < 80 && s > 30) { // Black stone
                boardState[i][j] = 'B';
            } else if (v > 180 && s < 80) { // White stone
                boardState[i][j] = 'W';
            }
        }
    }

    return boardState;
}

string boardStateToSGF(const vector<vector<char>> &boardState) {
    string sgfContent = "(;FF[4]GM[1]SZ[19]";
    string addBlack, addWhite;

    for (int y = 0; y < 19; ++y) {
        for (int x = 0; x < 19; ++x) {
            string coord = toSGFCoord(x, y); // SGF row 'a' is the top row; no y-inversion needed
            if (!coord.empty()) {
                if (boardState[y][x] == 'B') {
                    addBlack += "[" + coord + "]";
                } else if (boardState[y][x] == 'W') {
                    addWhite += "[" + coord + "]";
                }
            }
        }
    }

    if (!addBlack.empty())
        sgfContent += "AB" + addBlack;
    if (!addWhite.empty())
        sgfContent += "AW" + addWhite;
    sgfContent += ")\n";
    return sgfContent;
}

int main() {
    string imageFile = "go_board.jpg";
    Mat originalImage = imread(imageFile, IMREAD_COLOR);
    if (originalImage.empty()) {
        cerr << "Error: Could not open image." << endl;
        return -1;
    }

    Mat processedImage;
    vector<Point> corners = detectBoardCorners(originalImage, processedImage);

    Mat warpedImage;
    if (corners.size() == 4) {
        warpedImage = correctPerspective(originalImage, corners);
        imshow("Warped", warpedImage);
    } else {
        warpedImage = originalImage.clone();
        cerr << "Perspective correction skipped." << endl;
    }

    vector<
your code is incomplete; it stops after the following line: Mat warpedImage; if (corners.size() == 4) { warpedImage = correctPerspective(originalImage, corners); imshow("Warped", warpedImage); } else { warpedImage = originalImage.clone(); cerr << "Perspective correction skipped." << endl; }
You're right, the snippet cuts off before the rest of the main function. Here is the completed code, including grid-line detection, grid selection, stone detection, SGF generation, and visualization:
cpp
#include <algorithm>
#include <cmath>
#include <fstream>
#include <iostream>
#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;
using namespace std;

struct Line {
    Point p1;
    Point p2;
    Vec4f lineParams; // For fitLine results
};

string toSGFCoord(int x_index, int y_index) {
    if (x_index < 0 || x_index >= 19 || y_index < 0 || y_index >= 19) {
        return "";
    }
    // SGF points use consecutive letters 'a'..'s'; unlike GTP
    // coordinates, the letter 'i' is NOT skipped
    char col = 'a' + x_index;
    char row = 'a' + y_index;
    return string(1, col) + string(1, row);
}

vector<Point> detectBoardCorners(const Mat &image, Mat &processedImage) {
    Mat gray = image.clone();
    cvtColor(gray, gray, COLOR_BGR2GRAY);
    GaussianBlur(gray, gray, Size(5, 5), 0);

    vector<Vec4i> lines;
    HoughLinesP(gray, lines, 1, CV_PI / 180, 100, 50, 10); // Adjusted threshold

    vector<Vec4i> horizontalLines, verticalLines;
    for (const auto &line : lines) {
        float angle = atan2(line[3] - line[1], line[2] - line[0]);
        if (abs(angle) < CV_PI / 10) {
            horizontalLines.push_back(line);
        } else if (abs(abs(angle) - CV_PI / 2) < CV_PI / 10) {
            verticalLines.push_back(line);
        }
    }

    // Sort lines by their y or x coordinates
    sort(horizontalLines.begin(), horizontalLines.end(),
         [](const Vec4i &a, const Vec4i &b) {
             return (a[1] + a[3]) / 2 < (b[1] + b[3]) / 2;
         });
    sort(verticalLines.begin(), verticalLines.end(),
         [](const Vec4i &a, const Vec4i &b) {
             return (a[0] + a[2]) / 2 < (b[0] + b[2]) / 2;
         });

    Point2f intersections[4];
    bool foundAll = false;

    if (horizontalLines.size() >= 2 && verticalLines.size() >= 2) {
        // Take the topmost and bottommost horizontal lines, and leftmost and
        // rightmost vertical
        Vec4i topH = horizontalLines.front();
        Vec4i bottomH = horizontalLines.back();
        Vec4i leftV = verticalLines.front();
        Vec4i rightV = verticalLines.back();

        auto intersect = [](const Vec4i &l1, const Vec4i &l2) -> Point2f {
            float x1 = l1[0], y1 = l1[1], x2 = l1[2], y2 = l1[3];
            float x3 = l2[0], y3 = l2[1], x4 = l2[2], y4 = l2[3];
            float det = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4);
            if (abs(det) < 1e-6)
                return Point2f(-1, -1); // Parallel
            float t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / det;
            float u = -((x1 - x2) * (y1 - y3) - (y1 - y2) * (x1 - x3)) / det;
            if (t >= 0 && t <= 1 && u >= 0 &&
                u <= 1) { // Intersection within line segments
                return Point2f(x1 + t * (x2 - x1), y1 + t * (y2 - y1));
            } else {
                return Point2f(x1 + t * (x2 - x1),
                               y1 + t * (y2 - y1)); // Lines extended
            }
        };

        intersections[0] = intersect(topH, leftV);     // Top-left
        intersections[1] = intersect(topH, rightV);    // Top-right
        intersections[2] = intersect(bottomH, rightV); // Bottom-right
        intersections[3] = intersect(bottomH, leftV);  // Bottom-left

        bool allValid = true;
        for (const auto &p : intersections) {
            if (p.x < 0 || p.y < 0 || p.x >= image.cols || p.y >= image.rows) {
                allValid = false;
                break;
            }
        }

        if (allValid) {
            foundAll = true;
        }
    }

    vector<Point> selected_corners;
    if (foundAll) {
        selected_corners = {Point(intersections[0]), Point(intersections[1]),
                            Point(intersections[2]), Point(intersections[3])};
    } else {
        cerr << "Error: Could not find reliable board corners using Hough lines." << endl;
    }

    processedImage = image.clone();
    for (const auto &p : selected_corners) {
        circle(processedImage, p, 10, Scalar(255, 0, 0), -1);
    }

    return selected_corners;
}

// In correctPerspective():
Mat correctPerspective(const Mat &image, const vector<Point> &corners) {
    if (corners.size() != 4) {
        cerr << "Error: Need 4 corners for perspective correction." << endl;
        return image.clone();
    }

    float side = 500.0;
    Point2f target_corners[4] = {Point2f(0, 0), Point2f(side - 1, 0),
                                 Point2f(side - 1, side - 1),
                                 Point2f(0, side - 1)};

    Point2f src_corners_array[4];
    for (int i = 0; i < 4; ++i) {
        src_corners_array[i] = Point2f(corners[i]);
    }

    // Wrap points in OpenCV Mat structures
    Mat src_points(1, 4, CV_32FC2, src_corners_array);
    Mat dst_points(1, 4, CV_32FC2, target_corners);

    Mat homography = findHomography(src_points, dst_points, 0); // Corrected call
    Mat warped_image;
    warpPerspective(image, warped_image, homography, Size(side, side));

    return warped_image;
}

void detectGridLines(const Mat &image, vector<Line> &horizontalLines,
                     vector<Line> &verticalLines) {
    Mat gray, blurred, edges;
    cvtColor(image, gray, COLOR_BGR2GRAY);
    GaussianBlur(gray, blurred, Size(5, 5), 0);
    Canny(blurred, edges, 50, 150);

    vector<Vec4i> lines;
    HoughLinesP(edges, lines, 1, CV_PI / 180, 100, 30, 10); // Adjusted parameters

    for (const auto &line : lines) {
        float dx = line[2] - line[0];
        float dy = line[3] - line[1];
        float angle = atan2(dy, dx) * 180 / CV_PI;
        if (abs(angle) < 15) { // Increased angle tolerance
            horizontalLines.push_back(
                    {Point(line[0], line[1]), Point(line[2], line[3])});
        } else if (abs(abs(angle) - 90) < 15) {
            verticalLines.push_back(
                    {Point(line[0], line[1]), Point(line[2], line[3])});
        }
    }
}

void identifyGrid(vector<Line> &horizontalLines, vector<Line> &verticalLines,
                  vector<Line> &selectedHorizontal,
                  vector<Line> &selectedVertical, const Mat &image) {
    // Sort lines
    sort(horizontalLines.begin(), horizontalLines.end(),
         [](const Line &a, const Line &b) {
             return (a.p1.y + a.p2.y) / 2 < (b.p1.y + b.p2.y) / 2;
         });
    sort(verticalLines.begin(), verticalLines.end(),
         [](const Line &a, const Line &b) {
             return (a.p1.x + a.p2.x) / 2 < (b.p1.x + b.p2.x) / 2;
         });

    // Select 19 lines with approximate spacing
    auto selectLines = [](vector<Line> &lines, bool horizontal, float imageSize) {
        vector<Line> selected;
        if (lines.empty())
            return selected;

        float avgSpacing = imageSize / 18.0f; // Average spacing for 19 lines
        selected.push_back(lines.front());

        for (size_t i = 1; i < lines.size(); ++i) {
            float pos = horizontal ? (lines[i].p1.y + lines[i].p2.y) / 2.0f
                                   : (lines[i].p1.x + lines[i].p2.x) / 2.0f;
            float lastPos =
                    horizontal ? (selected.back().p1.y + selected.back().p2.y) / 2.0f
                               : (selected.back().p1.x + selected.back().p2.x) / 2.0f;
            if (abs(pos - lastPos) >= avgSpacing * 0.7 && selected.size() < 19) {
                selected.push_back(lines[i]);
            }
            if (selected.size() >= 19)
                break; // Ensure we don't go over 19
        }

        // Extend lines to image boundaries
        if (horizontal) {
            for (auto &line : selected) {
                line.p1.x = 0;
                line.p2.x = imageSize - 1;
            }
        } else {
            for (auto &line : selected) {
                line.p1.y = 0;
                line.p2.y = imageSize - 1;
            }
        }

        return selected;
    };

    selectedHorizontal =
            selectLines(horizontalLines, true, static_cast<float>(image.rows));
    selectedVertical =
            selectLines(verticalLines, false, static_cast<float>(image.cols));
}

vector<vector<char>>
detectStonesAtIntersections(const Mat &image,
                            const vector<Line> &horizontalLines,
                            const vector<Line> &verticalLines) {
    int boardSize = 19;
    vector<vector<char>> boardState(boardSize, vector<char>(boardSize, '.'));

    if (horizontalLines.size() < boardSize || verticalLines.size() < boardSize) {
        cerr << "Not enough grid lines detected." << endl;
        return boardState;
    }

    // Convert to HSV once, outside the loop (converting per intersection is wasteful)
    Mat hsv;
    cvtColor(image, hsv, COLOR_BGR2HSV);

    for (int i = 0; i < boardSize; ++i) {
        for (int j = 0; j < boardSize; ++j) {
            Point2f intersection(
                    (verticalLines[j].p1.x + verticalLines[j].p2.x) / 2.0f,
                    (horizontalLines[i].p1.y + horizontalLines[i].p2.y) / 2.0f);

            if (intersection.x < 0 || intersection.x >= image.cols ||
                intersection.y < 0 || intersection.y >= image.rows)
                continue;

            // Sample the color at the intersection center
            Vec3b centerColor = hsv.at<Vec3b>(cvRound(intersection.y),
                                              cvRound(intersection.x));
            int v = centerColor[2];
            int s = centerColor[1];

            if (v < 50 && s > 50) { // Black stone
                boardState[i][j] = 'B';
            } else if (v > 200 && s < 50) { // White stone
                boardState[i][j] = 'W';
            }
        }
    }

    return boardState;
}

string boardStateToSGF(const vector<vector<char>> &boardState) {
    string sgfContent = "(;FF[4]GM[1]SZ[19]";
    string addBlack, addWhite;

    for (int y = 0; y < 19; ++y) {
        for (int x = 0; x < 19; ++x) {
            string coord = toSGFCoord(x, y); // SGF row 'a' is the top row; no y-inversion needed
            if (!coord.empty()) {
                if (boardState[y][x] == 'B') {
                    addBlack += "[" + coord + "]";
                } else if (boardState[y][x] == 'W') {
                    addWhite += "[" + coord + "]";
                }
            }
        }
    }

    if (!addBlack.empty())
        sgfContent += "AB" + addBlack;
    if (!addWhite.empty())
        sgfContent += "AW" + addWhite;
    sgfContent += ")\n";
    return sgfContent;
}

int main() {
    string imageFile = "go_board.jpg";
    Mat originalImage = imread(imageFile, IMREAD_COLOR);
    if (originalImage.empty()) {
        cerr << "Error: Could not open image." << endl;
        return -1;
    }

    Mat processedImage;
    vector<Point> corners = detectBoardCorners(originalImage, processedImage);

    Mat warpedImage;
    if (corners.size() == 4) {
        warpedImage = correctPerspective(originalImage, corners);
        imshow("Warped", warpedImage);
    } else {
        warpedImage = originalImage.clone();
        cerr << "Perspective correction skipped." << endl;
    }

    vector<Line> horizontalLines, verticalLines;
    detectGridLines(warpedImage, horizontalLines, verticalLines);

    vector<Line> selectedHorz, selectedVert;
    identifyGrid(horizontalLines, verticalLines, selectedHorz, selectedVert, warpedImage);

    vector<vector<char>> board = detectStonesAtIntersections(warpedImage, selectedHorz, selectedVert);
    string sgf = boardStateToSGF(board);

    ofstream sgfFile("current_state.sgf");
    if (sgfFile.is_open()) {
        sgfFile << sgf;
        cout << "SGF saved." << endl;
    } else {
        cerr << "Failed to save SGF." << endl;
    }

    // Visualization
    Mat display = warpedImage.clone();
    for (const auto &line : selectedHorz)
        cv::line(display, line.p1, line.p2, Scalar(0, 255, 0), 1);
    for (const auto &line : selectedVert)
        cv::line(display, line.p1, line.p2, Scalar(0, 0, 255), 1);
    imshow("Grid", display);
    waitKey(0);

    return 0;
}

Code walkthrough:

  1. Read the image: the program loads go_board.jpg from disk.
  2. Detect board corners: detectBoardCorners locates the four corners of the board.
  3. Perspective correction: if all four corners are found, correctPerspective warps the board to a square view.
  4. Detect grid lines: detectGridLines finds horizontal and vertical line segments in the warped image.
  5. Identify the grid: identifyGrid selects 19 horizontal and 19 vertical lines from the candidates.
  6. Detect stones: detectStonesAtIntersections samples each grid intersection for a stone and its color.
  7. Generate the SGF file: boardStateToSGF converts the detected position to SGF, which is saved as current_state.sgf.
  8. Visualization: the selected grid lines are drawn on the image and displayed.
Make sure OpenCV is installed and that go_board.jpg exists in the current working directory.
it doesn't change at all: ./doubao.exe Error: Could not find reliable board corners using Hough lines. Perspective correction skipped. SGF saved.
The message Error: Could not find reliable board corners using Hough lines. means the Hough-based corner detection failed to find reliable corners, so the perspective-correction step was skipped. Here are the likely causes and corresponding fixes:

Possible causes

  1. Poor image quality: blur, uneven lighting, or noise all degrade Hough line detection.
  2. Unsuitable Hough parameters: the HoughLinesP parameters may be set too strictly to pick up the board's lines.
  3. Weak board features: the grid lines may be faint or partially occluded, making them hard to detect.

Fixes

1. Improve image quality

Preprocess the image before the Hough transform to enhance contrast and clarity.
cpp
Mat enhanceImage(const Mat &image) {
    Mat enhanced;
    // Histogram equalization to boost contrast
    if (image.channels() == 1) {
        equalizeHist(image, enhanced);
    } else {
        Mat ycrcb;
        cvtColor(image, ycrcb, COLOR_BGR2YCrCb);
        vector<Mat> channels;
        split(ycrcb, channels);
        equalizeHist(channels[0], channels[0]);
        merge(channels, ycrcb);
        cvtColor(ycrcb, enhanced, COLOR_YCrCb2BGR);
    }
    return enhanced;
}
detectBoardCorners 函数中调用该函数:
cpp
运行
vector<Point> detectBoardCorners(const Mat &image, Mat &processedImage) {
    Mat enhanced = enhanceImage(image);
    Mat gray;
    cvtColor(enhanced, gray, COLOR_BGR2GRAY);
    GaussianBlur(gray, gray, Size(5, 5), 0);

    // rest of the function unchanged
    // ...
}

2. Adjust the Hough transform parameters

Try loosening the HoughLinesP parameters so that more lines are detected.
cpp
HoughLinesP(gray, lines, 1, CV_PI / 180, 50, 30, 10); // lowered vote threshold

3. Make corner detection more robust

Consider a secondary detector such as cv::goodFeaturesToTrack or cv::findChessboardCorners to assist with finding the board corners. (Caveat: findChessboardCorners is designed for alternating black-and-white checkerboard patterns, so it is unlikely to succeed on a plain Go grid; treat it as a best-effort fallback only.)
cpp
vector<Point> detectBoardCorners(const Mat &image, Mat &processedImage) {
    // original Hough-based corner detection
    // ...

    // try an alternative corner detector
    vector<Point2f> corners;
    bool found = findChessboardCorners(gray, Size(19, 19), corners);
    if (found) {
        // Four outer corners of the row-major 19x19 array, reordered to the
        // TL, TR, BR, BL order the Hough path produces (corners[342] is the
        // bottom-left point and corners[360] the bottom-right)
        vector<Point> selected_corners = {
            Point(corners[0]),    // top-left
            Point(corners[18]),   // top-right
            Point(corners[360]),  // bottom-right
            Point(corners[342])   // bottom-left
        };
        return selected_corners;
    }

    // if that also fails, fall through to the Hough result
    return selected_corners;
}

Complete revised code

cpp
#include <algorithm>
#include <cmath>
#include <fstream>
#include <iostream>
#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;
using namespace std;

struct Line {
    Point p1;
    Point p2;
    Vec4f lineParams; // For fitLine results
};

string toSGFCoord(int x_index, int y_index) {
    if (x_index < 0 || x_index >= 19 || y_index < 0 || y_index >= 19) {
        return "";
    }
    // SGF points use consecutive letters 'a'..'s'; unlike GTP
    // coordinates, the letter 'i' is NOT skipped
    char col = 'a' + x_index;
    char row = 'a' + y_index;
    return string(1, col) + string(1, row);
}

Mat enhanceImage(const Mat &image) {
    Mat enhanced;
    if (image.channels() == 1) {
        equalizeHist(image, enhanced);
    } else {
        Mat ycrcb;
        cvtColor(image, ycrcb, COLOR_BGR2YCrCb);
        vector<Mat> channels;
        split(ycrcb, channels);
        equalizeHist(channels[0], channels[0]);
        merge(channels, ycrcb);
        cvtColor(ycrcb, enhanced, COLOR_YCrCb2BGR);
    }
    return enhanced;
}

vector<Point> detectBoardCorners(const Mat &image, Mat &processedImage) {
    Mat enhanced = enhanceImage(image);
    Mat gray;
    cvtColor(enhanced, gray, COLOR_BGR2GRAY);
    GaussianBlur(gray, gray, Size(5, 5), 0);

    vector<Vec4i> lines;
    HoughLinesP(gray, lines, 1, CV_PI / 180, 50, 30, 10); // Adjusted threshold

    vector<Vec4i> horizontalLines, verticalLines;
    for (const auto &line : lines) {
        float angle = atan2(line[3] - line[1], line[2] - line[0]);
        if (abs(angle) < CV_PI / 10) {
            horizontalLines.push_back(line);
        } else if (abs(abs(angle) - CV_PI / 2) < CV_PI / 10) {
            verticalLines.push_back(line);
        }
    }

    // Sort lines by their y or x coordinates
    sort(horizontalLines.begin(), horizontalLines.end(),
         [](const Vec4i &a, const Vec4i &b) {
             return (a[1] + a[3]) / 2 < (b[1] + b[3]) / 2;
         });
    sort(verticalLines.begin(), verticalLines.end(),
         [](const Vec4i &a, const Vec4i &b) {
             return (a[0] + a[2]) / 2 < (b[0] + b[2]) / 2;
         });

    Point2f intersections[4];
    bool foundAll = false;

    if (horizontalLines.size() >= 2 && verticalLines.size() >= 2) {
        // Take the topmost and bottommost horizontal lines, and leftmost and
        // rightmost vertical
        Vec4i topH = horizontalLines.front();
        Vec4i bottomH = horizontalLines.back();
        Vec4i leftV = verticalLines.front();
        Vec4i rightV = verticalLines.back();

        auto intersect = [](const Vec4i &l1, const Vec4i &l2) -> Point2f {
            float x1 = l1[0], y1 = l1[1], x2 = l1[2], y2 = l1[3];
            float x3 = l2[0], y3 = l2[1], x4 = l2[2], y4 = l2[3];
            float det = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4);
            if (abs(det) < 1e-6)
                return Point2f(-1, -1); // Parallel
            float t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / det;
            float u = -((x1 - x2) * (y1 - y3) - (y1 - y2) * (x1 - x3)) / det;
            if (t >= 0 && t <= 1 && u >= 0 &&
                u <= 1) { // Intersection within line segments
                return Point2f(x1 + t * (x2 - x1), y1 + t * (y2 - y1));
            } else {
                return Point2f(x1 + t * (x2 - x1),
                               y1 + t * (y2 - y1)); // Lines extended
            }
        };

        intersections[0] = intersect(topH, leftV);     // Top-left
        intersections[1] = intersect(topH, rightV);    // Top-right
        intersections[2] = intersect(bottomH, rightV); // Bottom-right
        intersections[3] = intersect(bottomH, leftV);  // Bottom-left

        bool allValid = true;
        for (const auto &p : intersections) {
            if (p.x < 0 || p.y < 0 || p.x >= image.cols || p.y >= image.rows) {
                allValid = false;
                break;
            }
        }

        if (allValid) {
            foundAll = true;
        }
    }

    vector<Point> selected_corners;
    if (foundAll) {
        selected_corners = {Point(intersections[0]), Point(intersections[1]),
                            Point(intersections[2]), Point(intersections[3])};
    } else {
        // Fall back to an alternative corner detector
        vector<Point2f> corners;
        bool found = findChessboardCorners(gray, Size(19, 19), corners);
        if (found) {
            // Four outer corners of the row-major 19x19 array, reordered
            // to TL, TR, BR, BL to match the Hough path
            selected_corners = {
                Point(corners[0]),    // top-left
                Point(corners[18]),   // top-right
                Point(corners[360]),  // bottom-right
                Point(corners[342])   // bottom-left
            };
        } else {
            cerr << "Error: Could not find reliable board corners using Hough lines." << endl;
        }
    }

    processedImage = image.clone();
    for (const auto &p : selected_corners) {
        circle(processedImage, p, 10, Scalar(255, 0, 0), -1);
    }

    return selected_corners;
}

// In correctPerspective():
Mat correctPerspective(const Mat &image, const vector<Point> &corners) {
    if (corners.size() != 4) {
        cerr << "Error: Need 4 corners for perspective correction." << endl;
        return image.clone();
    }

    float side = 500.0;
    Point2f target_corners[4] = {Point2f(0, 0), Point2f(side - 1, 0),
                                 Point2f(side - 1, side - 1),
                                 Point2f(0, side - 1)};

    Point2f src_corners_array[4];
    for (int i = 0; i < 4; ++i) {
        src_corners_array[i] = Point2f(corners[i]);
    }

    // Wrap points in OpenCV Mat structures
    Mat src_points(1, 4, CV_32FC2, src_corners_array);
    Mat dst_points(1, 4, CV_32FC2, target_corners);

    Mat homography = findHomography(src_points, dst_points, 0); // Corrected call
    Mat warped_image;
    warpPerspective(image, warped_image, homography, Size(side, side));

    return warped_image;
}

void detectGridLines(const Mat &image, vector<Line> &horizontalLines,
                     vector<Line> &verticalLines) {
    Mat gray, blurred, edges;
    cvtColor(image, gray, COLOR_BGR2GRAY);
    GaussianBlur(gray, blurred, Size(5, 5), 0);
    Canny(blurred, edges, 50, 150);

    vector<Vec4i> lines;
    HoughLinesP(edges, lines, 1, CV_PI / 180, 100, 30, 10); // Adjusted parameters

    for (const auto &line : lines) {
        float dx = line[2] - line[0];
        float dy = line[3] - line[1];
        float angle = atan2(dy, dx) * 180 / CV_PI;
        if (abs(angle) < 15) { // Increased angle tolerance
            horizontalLines.push_back(
                    {Point(line[0], line[1]), Point(line[2], line[3])});
        } else if (abs(abs(angle) - 90) < 15) {
            verticalLines.push_back(
                    {Point(line[0], line[1]), Point(line[2], line[3])});
        }
    }
}

void identifyGrid(vector<Line> &horizontalLines, vector<Line> &verticalLines,
                  vector<Line> &selectedHorizontal,
                  vector<Line> &selectedVertical, const Mat &image) {
    // Sort lines
    sort(horizontalLines.begin(), horizontalLines.end(),
         [](const Line &a, const Line &b) {
             return (a.p1.y + a.p2.y) / 2 < (b.p1.y + b.p2.y) / 2;
         });
    sort(verticalLines.begin(), verticalLines.end(),
         [](const Line &a, const Line &b) {
             return (a.p1.x + a.p2.x) / 2 < (b.p1.x + b.p2.x) / 2;
         });

    // Select 19 lines with approximate spacing
    auto selectLines = [](vector<Line> &lines, bool horizontal, float imageSize) {
        vector<Line> selected;
        if (lines.empty())
            return selected;

        float avgSpacing = imageSize / 18.0f; // Average spacing for 19 lines
        selected.push_back(lines.front());

        for (size_t i = 1; i < lines.size(); ++i) {
            float pos = horizontal ? (lines[i].p1.y + lines[i].p2.y) / 2.0f
                                   : (lines[i].p1.x + lines[i].p2.x) / 2.0f;
            float lastPos =
                    horizontal ? (selected.back().p1.y + selected.back().p2.y) / 2.0f
                               : (selected.back().p1.x + selected.back().p2.x) / 2.0f;
            if (abs(pos - lastPos) >= avgSpacing * 0.7 && selected.size() < 19) {
                selected.push_back(lines[i]);
            }
            if (selected.size() >= 19)
                break; // Ensure we don't go over 19
        }

        // Extend lines to image boundaries
        if (horizontal) {
            for (auto &line : selected) {
                line.p1.x = 0;
                line.p2.x = imageSize - 1;
            }
        } else {
            for (auto &line : selected) {
                line.p1.y = 0;
                line.p2.y = imageSize - 1;
            }
        }

        return selected;
    };

    selectedHorizontal =
            selectLines(horizontalLines, true, static_cast<float>(image.rows));
    selectedVertical =
            selectLines(verticalLines, false, static_cast<float>(image.cols));
}

vector<vector<char>>
detectStonesAtIntersections(const Mat &image,
                            const vector<Line> &horizontalLines,
                            const vector<Line> &verticalLines) {
    int boardSize = 19;
    vector<vector<char>> boardState(boardSize, vector<char>(boardSize, '.'));

    if (horizontalLines.size() < boardSize || verticalLines.size() < boardSize) {
        cerr << "Not enough grid lines detected." << endl;
        return boardState;
    }

    // Convert to HSV once, outside the loop (converting per intersection is wasteful)
    Mat hsv;
    cvtColor(image, hsv, COLOR_BGR2HSV);

    for (int i = 0; i < boardSize; ++i) {
        for (int j = 0; j < boardSize; ++j) {
            Point2f intersection(
                    (verticalLines[j].p1.x + verticalLines[j].p2.x) / 2.0f,
                    (horizontalLines[i].p1.y + horizontalLines[i].p2.y) / 2.0f);

            if (intersection.x < 0 || intersection.x >= image.cols ||
                intersection.y < 0 || intersection.y >= image.rows)
                continue;

            // Sample the color at the intersection center
            Vec3b centerColor = hsv.at<Vec3b>(cvRound(intersection.y),
                                              cvRound(intersection.x));
            int v = centerColor[2];
            int s = centerColor[1];

            if (v < 50 && s > 50) { // Black stone
                boardState[i][j] = 'B';
            } else if (v > 200 && s < 50) { // White stone
                boardState[i][j] = 'W';
            }
        }
    }

    return boardState;
}

string boardStateToSGF(const vector<vector<char>> &boardState) {
    string sgfContent = "(;FF[4]GM[1]SZ[19]";
    string addBlack, addWhite;
i changed a image which should be much easier for you to detect board corner, still you give error like $ ./doubao.exe Error: Could not find reliable board corners using Hough lines. Perspective correction skipped. SGF saved.
Even though the new image should make corner detection easier, the same error persists. Several things could still be going wrong:

Possible problems and fixes

1. Ineffective preprocessing

Grayscale conversion and Gaussian blur alone may not bring out the grid lines strongly enough for the Hough transform.
Fix:
Boost contrast with histogram equalization, added right after the grayscale conversion in detectBoardCorners.
cpp
vector<Point> detectBoardCorners(const Mat &image, Mat &processedImage) {
    Mat gray;
    cvtColor(image, gray, COLOR_BGR2GRAY);
    // add histogram equalization
    equalizeHist(gray, gray); 
    GaussianBlur(gray, gray, Size(5, 5), 0);

    // rest of the function unchanged
    // ...
}

2. Unsuitable Hough parameters

The Hough transform's parameters (e.g. the vote threshold) may be too strict to detect the board's lines.
Fix:
Lower the HoughLinesP thresholds so more candidate lines survive.
cpp
vector<Vec4i> lines;
// lowered vote threshold
HoughLinesP(gray, lines, 1, CV_PI / 180, 50, 30, 10);

3. Weak board features

Even in a visually cleaner image, the grid lines may still not stand out, or other edges may interfere.
Fix:
Apply an explicit edge-detection operator, such as the Laplacian or Sobel, to strengthen the line features before the Hough transform.
cpp
vector<Point> detectBoardCorners(const Mat &image, Mat &processedImage) {
    Mat gray;
    cvtColor(image, gray, COLOR_BGR2GRAY);
    equalizeHist(gray, gray);
    GaussianBlur(gray, gray, Size(5, 5), 0);

    // Sobel edge detection to strengthen line features
    Mat edges;
    Sobel(gray, edges, CV_8U, 1, 1);
    threshold(edges, edges, 100, 255, THRESH_BINARY);

    vector<Vec4i> lines;
    HoughLinesP(edges, lines, 1, CV_PI / 180, 50, 30, 10);

    // rest of the function unchanged
    // ...
}

4. Incomplete corner-detection logic

The current corner detection relies entirely on the lines returned by the Hough transform; if too few lines are found, no reliable corners can be derived.
Fix
Try other corner detectors, such as cv::goodFeaturesToTrack, or cv::findChessboardCorners as a fallback. (Caveat: findChessboardCorners is designed for black-and-white checkerboard calibration patterns, so it is unlikely to succeed on a plain Go grid; treat it as a long shot.)
cpp
vector<Point> detectBoardCorners(const Mat &image, Mat &processedImage) {
    Mat gray;
    cvtColor(image, gray, COLOR_BGR2GRAY);
    equalizeHist(gray, gray);
    GaussianBlur(gray, gray, Size(5, 5), 0);

    vector<Vec4i> lines;
    Mat edges;
    Canny(gray, edges, 50, 150); // HoughLinesP needs a binary edge map
    HoughLinesP(edges, lines, 1, CV_PI / 180, 50, 30, 10);

    // Original Hough-based corner-detection code
    // ...

    // If the Hough approach fails to find reliable corners, fall back to findChessboardCorners
    vector<Point2f> corners;
    bool found = findChessboardCorners(gray, Size(19, 19), corners);
    if (found) {
        // Assuming row-major corner ordering; keep the TL, TR, BR, BL order
        // used by the Hough path
        vector<Point> selected_corners = {
            Point(corners[0]),   // top-left
            Point(corners[18]),  // top-right
            Point(corners[360]), // bottom-right
            Point(corners[342])  // bottom-left
        };
        return selected_corners;
    }

    // If that also fails, return an empty vector
    return {};
}

Summary

With these changes -- stronger preprocessing, relaxed Hough parameters, a more capable edge detector, and a fallback corner detector -- corner detection should become noticeably more robust. Pick whichever combination suits your actual images.
go.cpp
Can you give me the complete code of the function detectBoardCorners?
Here is the complete code of the detectBoardCorners function with some potential improvements based on the analysis in the previous response. This code includes histogram equalization, adjusted Hough transform parameters, and an alternative corner detection method using findChessboardCorners as a fallback.
cpp
#include <algorithm>
#include <cmath>
#include <iostream>
#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;
using namespace std;

vector<Point> detectBoardCorners(const Mat &image, Mat &processedImage) {
    Mat gray;
    cvtColor(image, gray, COLOR_BGR2GRAY);
    // Add histogram equalization to enhance contrast
    equalizeHist(gray, gray); 
    GaussianBlur(gray, gray, Size(5, 5), 0);

    vector<Vec4i> lines;
    // Run Canny first: HoughLinesP expects a binary edge image
    Mat edges;
    Canny(gray, edges, 50, 150);
    // Adjusted HoughLinesP parameters to detect more lines
    HoughLinesP(edges, lines, 1, CV_PI / 180, 50, 30, 10);

    vector<Vec4i> horizontalLines, verticalLines;
    for (const auto &line : lines) {
        float angle = atan2(line[3] - line[1], line[2] - line[0]);
        if (abs(angle) < CV_PI / 10) {
            horizontalLines.push_back(line);
        } else if (abs(abs(angle) - CV_PI / 2) < CV_PI / 10) {
            verticalLines.push_back(line);
        }
    }

    // Sort lines by their y or x coordinates
    sort(horizontalLines.begin(), horizontalLines.end(),
         [](const Vec4i &a, const Vec4i &b) {
             return (a[1] + a[3]) / 2 < (b[1] + b[3]) / 2;
         });
    sort(verticalLines.begin(), verticalLines.end(),
         [](const Vec4i &a, const Vec4i &b) {
             return (a[0] + a[2]) / 2 < (b[0] + b[2]) / 2;
         });

    Point2f intersections[4];
    bool foundAll = false;

    if (horizontalLines.size() >= 2 && verticalLines.size() >= 2) {
        // Take the topmost and bottommost horizontal lines, and leftmost and
        // rightmost vertical
        Vec4i topH = horizontalLines.front();
        Vec4i bottomH = horizontalLines.back();
        Vec4i leftV = verticalLines.front();
        Vec4i rightV = verticalLines.back();

        auto intersect = [](const Vec4i &l1, const Vec4i &l2) -> Point2f {
            float x1 = l1[0], y1 = l1[1], x2 = l1[2], y2 = l1[3];
            float x3 = l2[0], y3 = l2[1], x4 = l2[2], y4 = l2[3];
            float det = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4);
            if (abs(det) < 1e-6)
                return Point2f(-1, -1); // Parallel
            float t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / det;
            float u = -((x1 - x2) * (y1 - y3) - (y1 - y2) * (x1 - x3)) / det;
            if (t >= 0 && t <= 1 && u >= 0 &&
                u <= 1) { // Intersection within line segments
                return Point2f(x1 + t * (x2 - x1), y1 + t * (y2 - y1));
            } else {
                return Point2f(x1 + t * (x2 - x1),
                               y1 + t * (y2 - y1)); // Lines extended
            }
        };

        intersections[0] = intersect(topH, leftV);     // Top-left
        intersections[1] = intersect(topH, rightV);    // Top-right
        intersections[2] = intersect(bottomH, rightV); // Bottom-right
        intersections[3] = intersect(bottomH, leftV);  // Bottom-left

        bool allValid = true;
        for (const auto &p : intersections) {
            if (p.x < 0 || p.y < 0 || p.x >= image.cols || p.y >= image.rows) {
                allValid = false;
                break;
            }
        }

        if (allValid) {
            foundAll = true;
        }
    }

    vector<Point> selected_corners;
    if (foundAll) {
        selected_corners = {Point(intersections[0]), Point(intersections[1]),
                            Point(intersections[2]), Point(intersections[3])};
    } else {
        // Alternative corner-detection fallback. Caveat: findChessboardCorners
        // targets checkerboard calibration patterns, so on a plain Go grid it
        // is unlikely to succeed.
        vector<Point2f> corners;
        bool found = findChessboardCorners(gray, Size(19, 19), corners);
        if (found) {
            selected_corners = {
                Point(corners[0]),   // top-left
                Point(corners[18]),  // top-right
                Point(corners[360]), // bottom-right
                Point(corners[342])  // bottom-left
            };
        } else {
            cerr << "Error: Could not find reliable board corners using Hough lines." << endl;
        }
    }

    processedImage = image.clone();
    for (const auto &p : selected_corners) {
        circle(processedImage, p, 10, Scalar(255, 0, 0), -1);
    }

    return selected_corners;
}
You can use this function in your main program as follows:
cpp
int main() {
    string imageFile = "go_board.jpg";
    Mat originalImage = imread(imageFile, IMREAD_COLOR);
    if (originalImage.empty()) {
        cerr << "Error: Could not open image." << endl;
        return -1;
    }

    Mat processedImage;
    vector<Point> corners = detectBoardCorners(originalImage, processedImage);

    // Further processing...

    return 0;
}
This code first tries to detect the board corners using the Hough transform. If it fails, it attempts to use findChessboardCorners as an alternative method. If both methods fail, it prints an error message.
go.cpp
Your grid lines seem correct, yet you still complain that corner detection is not reliable.
It's possible that the grid lines are detected correctly, but the corner detection fails because the algorithms for these two tasks have different requirements and sensitivities. Here are some possible reasons and solutions to improve the corner detection:

Possible Reasons

1. Hough Transform Parameters

The parameters used in the HoughLinesP function for corner detection might be too strict. This can lead to missing some important lines, making it difficult to find the reliable intersections (corners).

2. Image Pre-processing

The pre-processing steps, such as grayscale conversion and Gaussian blur, might not be sufficient to enhance the board's edges for accurate corner detection. Noise or uneven lighting can also degrade line detection.

3. Perspective Distortion

If the image is taken at an angle, perspective distortion can cause the lines to appear skewed, making it harder to accurately detect the corners as the intersections of the horizontal and vertical lines.

4. Background Interference

The background of the image might contain lines or patterns that are detected as part of the Hough lines, which can interfere with the identification of the actual board lines and corners.

Solutions

1. Adjust Hough Transform Parameters

Try reducing the threshold values in the HoughLinesP function to detect more lines. For example:
cpp
vector<Vec4i> lines;
Mat edges;
Canny(gray, edges, 50, 150); // HoughLinesP needs a binary edge map
// Reduce the threshold from 100 to 50
HoughLinesP(edges, lines, 1, CV_PI / 180, 50, 30, 10);

2. Improve Image Pre-processing

Add more pre-processing steps, such as histogram equalization, to enhance the contrast of the image:
cpp
Mat gray;
cvtColor(image, gray, COLOR_BGR2GRAY);
// Add histogram equalization
equalizeHist(gray, gray);
GaussianBlur(gray, gray, Size(5, 5), 0);

3. Correct Perspective Early

If possible, try to correct the perspective of the image before performing corner detection. You can use some rough estimation methods to get an initial set of corners and then perform a rough perspective correction.

4. Filter Out Background Interference

You can use morphological operations such as erosion and dilation to remove small noise and isolate the board area before performing line detection.
Here is the updated detectBoardCorners function incorporating the above improvements:
cpp
vector<Point> detectBoardCorners(const Mat &image, Mat &processedImage) {
    Mat gray;
    cvtColor(image, gray, COLOR_BGR2GRAY);
    // Add histogram equalization
    equalizeHist(gray, gray); 
    GaussianBlur(gray, gray, Size(5, 5), 0);

    vector<Vec4i> lines;
    // Reduce the threshold; run Canny first since HoughLinesP expects edges
    Mat edges;
    Canny(gray, edges, 50, 150);
    HoughLinesP(edges, lines, 1, CV_PI / 180, 50, 30, 10);

    vector<Vec4i> horizontalLines, verticalLines;
    for (const auto &line : lines) {
        float angle = atan2(line[3] - line[1], line[2] - line[0]);
        if (abs(angle) < CV_PI / 10) {
            horizontalLines.push_back(line);
        } else if (abs(abs(angle) - CV_PI / 2) < CV_PI / 10) {
            verticalLines.push_back(line);
        }
    }

    // Sort lines by their y or x coordinates
    sort(horizontalLines.begin(), horizontalLines.end(),
         [](const Vec4i &a, const Vec4i &b) {
             return (a[1] + a[3]) / 2 < (b[1] + b[3]) / 2;
         });
    sort(verticalLines.begin(), verticalLines.end(),
         [](const Vec4i &a, const Vec4i &b) {
             return (a[0] + a[2]) / 2 < (b[0] + b[2]) / 2;
         });

    Point2f intersections[4];
    bool foundAll = false;

    if (horizontalLines.size() >= 2 && verticalLines.size() >= 2) {
        // Take the topmost and bottommost horizontal lines, and leftmost and
        // rightmost vertical
        Vec4i topH = horizontalLines.front();
        Vec4i bottomH = horizontalLines.back();
        Vec4i leftV = verticalLines.front();
        Vec4i rightV = verticalLines.back();

        auto intersect = [](const Vec4i &l1, const Vec4i &l2) -> Point2f {
            float x1 = l1[0], y1 = l1[1], x2 = l1[2], y2 = l1[3];
            float x3 = l2[0], y3 = l2[1], x4 = l2[2], y4 = l2[3];
            float det = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4);
            if (abs(det) < 1e-6)
                return Point2f(-1, -1); // Parallel
            float t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / det;
            float u = -((x1 - x2) * (y1 - y3) - (y1 - y2) * (x1 - x3)) / det;
            if (t >= 0 && t <= 1 && u >= 0 &&
                u <= 1) { // Intersection within line segments
                return Point2f(x1 + t * (x2 - x1), y1 + t * (y2 - y1));
            } else {
                return Point2f(x1 + t * (x2 - x1),
                               y1 + t * (y2 - y1)); // Lines extended
            }
        };

        intersections[0] = intersect(topH, leftV);     // Top-left
        intersections[1] = intersect(topH, rightV);    // Top-right
        intersections[2] = intersect(bottomH, rightV); // Bottom-right
        intersections[3] = intersect(bottomH, leftV);  // Bottom-left

        bool allValid = true;
        for (const auto &p : intersections) {
            if (p.x < 0 || p.y < 0 || p.x >= image.cols || p.y >= image.rows) {
                allValid = false;
                break;
            }
        }

        if (allValid) {
            foundAll = true;
        }
    }

    vector<Point> selected_corners;
    if (foundAll) {
        selected_corners = {Point(intersections[0]), Point(intersections[1]),
                            Point(intersections[2]), Point(intersections[3])};
    } else {
        cerr << "Error: Could not find reliable board corners using Hough lines."
             << endl;
    }

    processedImage = image.clone();
    for (const auto &p : selected_corners) {
        circle(processedImage, p, 10, Scalar(255, 0, 0), -1);
    }

    return selected_corners;
}
You can replace the original detectBoardCorners function in your go.cpp file with this updated version to see if it improves the corner detection.
$ ./doubao.exe
Error: Could not find reliable board corners using Hough lines.
Perspective correction skipped.
SGF saved.
It appears that even after these code adjustments, corner detection still fails. Let's analyze the possible causes further and add more remedies.

Possible causes

1. Image noise and lighting

Noise or uneven lighting in the image degrades edge and line detection, making it hard for the Hough transform to pick up the board's edges.

2. Unsuitable Hough transform parameters

The Hough parameters (threshold, minimum line length, maximum line gap, and so on) may not suit this image, so too few lines are detected to determine the corners.

3. Weak board features

The board's lines may not be crisp enough, or may be partially occluded, making the corners hard to detect.

4. Incomplete corner-detection logic

The current logic may rely too heavily on the Hough lines; with too few lines detected, no reliable corners can be found.

Remedies

1. Stronger image preprocessing

Before the Hough transform, preprocess the image more thoroughly to suppress noise, boost contrast, and emphasize edges.
cpp
Mat preprocessImage(const Mat &image) {
    Mat gray;
    cvtColor(image, gray, COLOR_BGR2GRAY);
    // Histogram equalization to boost contrast
    equalizeHist(gray, gray);
    // Gaussian blur to suppress noise
    GaussianBlur(gray, gray, Size(5, 5), 0);
    // Bilateral filter for extra edge-preserving smoothing
    // (note: bilateralFilter does not support in-place operation)
    Mat smoothed;
    bilateralFilter(gray, smoothed, 9, 75, 75);
    return smoothed;
}

2. Tune the Hough transform parameters

Experiment with different Hough parameters to find settings that suit the current image, and feed the transform an edge map rather than the raw grayscale image.
cpp
vector<Vec4i> detectLines(const Mat &image) {
    vector<Vec4i> lines;
    // Run Canny first; HoughLinesP expects a binary edge image
    Mat edges;
    Canny(image, edges, 50, 150);
    // Tuned Hough transform parameters
    HoughLinesP(edges, lines, 1, CV_PI / 180, 30, 30, 10);
    return lines;
}

3. Improve the corner-detection logic

Use a more capable corner-detection algorithm, or combine several methods, to raise the detection accuracy.
cpp
vector<Point> detectBoardCorners(const Mat &image, Mat &processedImage) {
    Mat preprocessed = preprocessImage(image);
    vector<Vec4i> lines = detectLines(preprocessed);

    vector<Vec4i> horizontalLines, verticalLines;
    for (const auto &line : lines) {
        float angle = atan2(line[3] - line[1], line[2] - line[0]);
        if (abs(angle) < CV_PI / 10) {
            horizontalLines.push_back(line);
        } else if (abs(abs(angle) - CV_PI / 2) < CV_PI / 10) {
            verticalLines.push_back(line);
        }
    }

    // Sort the detected lines
    sort(horizontalLines.begin(), horizontalLines.end(),
         [](const Vec4i &a, const Vec4i &b) {
             return (a[1] + a[3]) / 2 < (b[1] + b[3]) / 2;
         });
    sort(verticalLines.begin(), verticalLines.end(),
         [](const Vec4i &a, const Vec4i &b) {
             return (a[0] + a[2]) / 2 < (b[0] + b[2]) / 2;
         });

    Point2f intersections[4];
    bool foundAll = false;

    if (horizontalLines.size() >= 2 && verticalLines.size() >= 2) {
        // Take the topmost/bottommost horizontal lines and the leftmost/rightmost vertical lines
        Vec4i topH = horizontalLines.front();
        Vec4i bottomH = horizontalLines.back();
        Vec4i leftV = verticalLines.front();
        Vec4i rightV = verticalLines.back();

        auto intersect = [](const Vec4i &l1, const Vec4i &l2) -> Point2f {
            float x1 = l1[0], y1 = l1[1], x2 = l1[2], y2 = l1[3];
            float x3 = l2[0], y3 = l2[1], x4 = l2[2], y4 = l2[3];
            float det = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4);
            if (abs(det) < 1e-6)
                return Point2f(-1, -1); // parallel lines
            float t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / det;
            float u = -((x1 - x2) * (y1 - y3) - (y1 - y2) * (x1 - x3)) / det;
            if (t >= 0 && t <= 1 && u >= 0 && u <= 1) { // segments intersect
                return Point2f(x1 + t * (x2 - x1), y1 + t * (y2 - y1));
            } else {
                return Point2f(x1 + t * (x2 - x1), y1 + t * (y2 - y1)); // extended lines intersect
            }
        };

        intersections[0] = intersect(topH, leftV);     // top-left
        intersections[1] = intersect(topH, rightV);    // top-right
        intersections[2] = intersect(bottomH, rightV); // bottom-right
        intersections[3] = intersect(bottomH, leftV);  // bottom-left

        bool allValid = true;
        for (const auto &p : intersections) {
            if (p.x < 0 || p.y < 0 || p.x >= image.cols || p.y >= image.rows) {
                allValid = false;
                break;
            }
        }

        if (allValid) {
            foundAll = true;
        }
    }

    vector<Point> selected_corners;
    if (foundAll) {
        selected_corners = {Point(intersections[0]), Point(intersections[1]),
                            Point(intersections[2]), Point(intersections[3])};
    } else {
        // Fall back to another corner detector (findChessboardCorners targets
        // checkerboard calibration patterns, so it is a long shot on a Go grid)
        vector<Point2f> corners;
        bool found = findChessboardCorners(preprocessed, Size(19, 19), corners);
        if (found) {
            selected_corners = {
                Point(corners[0]),   // top-left
                Point(corners[18]),  // top-right
                Point(corners[360]), // bottom-right
                Point(corners[342])  // bottom-left
            };
        } else {
            cerr << "Error: Could not find reliable board corners using Hough lines." << endl;
        }
    }

    processedImage = image.clone();
    for (const auto &p : selected_corners) {
        circle(processedImage, p, 10, Scalar(255, 0, 0), -1);
    }

    return selected_corners;
}

Complete code example

cpp
#include <algorithm>
#include <cmath>
#include <fstream>
#include <iostream>
#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;
using namespace std;

struct Line {
    Point p1;
    Point p2;
    Vec4f lineParams; // For fitLine results
};

string toSGFCoord(int x_index, int y_index) {
    if (x_index < 0 || x_index >= 19 || y_index < 0 || y_index >= 19) {
        return "";
    }
    // SGF point coordinates run 'a'..'s' with no letter skipped;
    // skipping 'i' is a GTP/board-labeling convention, not part of SGF
    char col = 'a' + x_index;
    char row = 'a' + y_index;
    return string(1, col) + string(1, row);
}

Mat preprocessImage(const Mat &image) {
    Mat gray;
    cvtColor(image, gray, COLOR_BGR2GRAY);
    equalizeHist(gray, gray);
    GaussianBlur(gray, gray, Size(5, 5), 0);
    // bilateralFilter does not support in-place operation
    Mat smoothed;
    bilateralFilter(gray, smoothed, 9, 75, 75);
    return smoothed;
}

vector<Vec4i> detectLines(const Mat &image) {
    vector<Vec4i> lines;
    // HoughLinesP expects a binary edge image, so run Canny first
    Mat edges;
    Canny(image, edges, 50, 150);
    HoughLinesP(edges, lines, 1, CV_PI / 180, 30, 30, 10);
    return lines;
}

vector<Point> detectBoardCorners(const Mat &image, Mat &processedImage) {
    Mat preprocessed = preprocessImage(image);
    vector<Vec4i> lines = detectLines(preprocessed);

    vector<Vec4i> horizontalLines, verticalLines;
    for (const auto &line : lines) {
        float angle = atan2(line[3] - line[1], line[2] - line[0]);
        if (abs(angle) < CV_PI / 10) {
            horizontalLines.push_back(line);
        } else if (abs(abs(angle) - CV_PI / 2) < CV_PI / 10) {
            verticalLines.push_back(line);
        }
    }

    sort(horizontalLines.begin(), horizontalLines.end(),
         [](const Vec4i &a, const Vec4i &b) {
             return (a[1] + a[3]) / 2 < (b[1] + b[3]) / 2;
         });
    sort(verticalLines.begin(), verticalLines.end(),
         [](const Vec4i &a, const Vec4i &b) {
             return (a[0] + a[2]) / 2 < (b[0] + b[2]) / 2;
         });

    Point2f intersections[4];
    bool foundAll = false;

    if (horizontalLines.size() >= 2 && verticalLines.size() >= 2) {
        Vec4i topH = horizontalLines.front();
        Vec4i bottomH = horizontalLines.back();
        Vec4i leftV = verticalLines.front();
        Vec4i rightV = verticalLines.back();

        auto intersect = [](const Vec4i &l1, const Vec4i &l2) -> Point2f {
            float x1 = l1[0], y1 = l1[1], x2 = l1[2], y2 = l1[3];
            float x3 = l2[0], y3 = l2[1], x4 = l2[2], y4 = l2[3];
            float det = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4);
            if (abs(det) < 1e-6)
                return Point2f(-1, -1);
            float t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / det;
            float u = -((x1 - x2) * (y1 - y3) - (y1 - y2) * (x1 - x3)) / det;
            if (t >= 0 && t <= 1 && u >= 0 && u <= 1) {
                return Point2f(x1 + t * (x2 - x1), y1 + t * (y2 - y1));
            } else {
                return Point2f(x1 + t * (x2 - x1), y1 + t * (y2 - y1));
            }
        };

        intersections[0] = intersect(topH, leftV);
        intersections[1] = intersect(topH, rightV);
        intersections[2] = intersect(bottomH, rightV);
        intersections[3] = intersect(bottomH, leftV);

        bool allValid = true;
        for (const auto &p : intersections) {
            if (p.x < 0 || p.y < 0 || p.x >= image.cols || p.y >= image.rows) {
                allValid = false;
                break;
            }
        }

        if (allValid) {
            foundAll = true;
        }
    }

    vector<Point> selected_corners;
    if (foundAll) {
        selected_corners = {Point(intersections[0]), Point(intersections[1]),
                            Point(intersections[2]), Point(intersections[3])};
    } else {
        // Fallback: findChessboardCorners targets checkerboard calibration
        // patterns, so it is unlikely to succeed on a plain Go grid
        vector<Point2f> corners;
        bool found = findChessboardCorners(preprocessed, Size(19, 19), corners);
        if (found) {
            selected_corners = {
                Point(corners[0]),   // top-left
                Point(corners[18]),  // top-right
                Point(corners[360]), // bottom-right
                Point(corners[342])  // bottom-left
            };
        } else {
            cerr << "Error: Could not find reliable board corners using Hough lines." << endl;
        }
    }

    processedImage = image.clone();
    for (const auto &p : selected_corners) {
        circle(processedImage, p, 10, Scalar(255, 0, 0), -1);
    }

    return selected_corners;
}

Mat correctPerspective(const Mat &image, const vector<Point> &corners) {
    if (corners.size() != 4) {
        cerr << "Error: Need 4 corners for perspective correction." << endl;
        return image.clone();
    }

    float side = 500.0;
    Point2f target_corners[4] = {Point2f(0, 0), Point2f(side - 1, 0),
                                 Point2f(side - 1, side - 1),
                                 Point2f(0, side - 1)};

    Point2f src_corners_array[4];
    for (int i = 0; i < 4; ++i) {
        src_corners_array[i] = Point2f(corners[i]);
    }

    // Wrap points in OpenCV Mat structures
    Mat src_points(1, 4, CV_32FC2, src_corners_array);
    Mat dst_points(1, 4, CV_32FC2, target_corners);

    Mat homography = findHomography(src_points, dst_points, 0); // Corrected call
    Mat warped_image;
    warpPerspective(image, warped_image, homography, Size(side, side));

    return warped_image;
}

void detectGridLines(const Mat &image, vector<Line> &horizontalLines,
                     vector<Line> &verticalLines) {
    Mat gray, blurred, edges;
    cvtColor(image, gray, COLOR_BGR2GRAY);
    GaussianBlur(gray, blurred, Size(5, 5), 0);
    Canny(blurred, edges, 50, 150);

    vector<Vec4i> lines;
    HoughLinesP(edges, lines, 1, CV_PI / 180, 100, 30, 10); // Adjusted parameters

    for (const auto &line : lines) {
        float dx = line[2] - line[0];
        float dy = line[3] - line[1];
        float angle = atan2(dy, dx) * 180 / CV_PI;
        if (abs(angle) < 15) { // Increased angle tolerance
            horizontalLines.push_back(
                    {Point(line[0], line[1]), Point(line[2], line[3])});
        } else if (abs(abs(angle) - 90) < 15) {
            verticalLines.push_back(
                    {Point(line[0], line[1]), Point(line[2], line[3])});
        }
    }
}

void identifyGrid(vector<Line> &horizontalLines, vector<Line> &verticalLines,
                  vector<Line> &selectedHorizontal,
                  vector<Line> &selectedVertical, const Mat &image) {
    // Sort lines
    sort(horizontalLines.begin(), horizontalLines.end(),
         [](const Line &a, const Line &b) {
             return (a.p1.y + a.p2.y) / 2 < (b.p1.y + b.p2.y) / 2;
         });
    sort(verticalLines.begin(), verticalLines.end(),
         [](const Line &a, const Line &b) {
             return (a.p1.x + a.p2.x) / 2 < (b.p1.x + b.p2.x) / 2;
         });

    // Select 19 lines with approximate spacing
    auto selectLines = [](vector<Line> &lines, bool horizontal, float imageSize) {
        vector<Line> selected;
        if (lines.empty())
            return selected;

        float avgSpacing = imageSize / 18.0f; // Average spacing for 19 lines
        selected.push_back(lines.front());

        for (size_t i = 1; i < lines.size(); ++i) {
            float pos = horizontal ? (lines[i].p1.y + lines[i].p2.y) / 2.0f
                                   : (lines[i].p1.x + lines[i].p2.x) / 2.0f;
            float lastPos =
                    horizontal ? (selected.back().p1.y + selected.back().p2.y) / 2.0f
                               : (selected.back().p1.x + selected.back().p2.x) / 2.0f;
            if (abs(pos - lastPos) >= avgSpacing * 0.7 && selected.size() < 19) {
                selected.push_back(lines[i]);
            }
            if (selected.size() >= 19)
                break; // Ensure we don't go over 19
        }

        // Extend lines to image boundaries
        if (horizontal) {
            for (auto &line : selected) {
                line.p1.x = 0;
                line.p2.x = imageSize - 1;
            }
        } else {
            for (auto &line : selected) {
                line.p1.y = 0;
                line.p2.y = imageSize - 1;
            }
        }

        return selected;
    };

    selectedHorizontal =
            selectLines(horizontalLines, true, static_cast<float>(image.rows));
    selectedVertical =
            selectLines(verticalLines, false, static_cast<float>(image.cols));
}

vector<vector<char>>
detectStonesAtIntersections(const Mat &image,
                            const vector<Line> &horizontalLines,
                            const vector<Line> &verticalLines) {
    int boardSize = 19;
    vector<vector<char>> boardState(boardSize, vector<char>(boardSize, '.'));

    if (horizontalLines.size() < boardSize || verticalLines.size() < boardSize) {
        cerr << "Not enough grid lines detected." << endl;
        return boardState;
    }

    // Convert to HSV once, outside the intersection loop
    Mat hsv;
    cvtColor(image, hsv, COLOR_BGR2HSV);

    for (int i = 0; i < boardSize; ++i) {
        for (int j = 0; j < boardSize; ++j) {
            Point2f intersection(
                    (verticalLines[j].p1.x + verticalLines[j].p2.x) / 2.0f,
                    (horizontalLines[i].p1.y + horizontalLines[i].p2.y) / 2.0f);

            if (intersection.x < 0 || intersection.x >= image.cols ||
                intersection.y < 0 || intersection.y >= image.rows)
                continue;

            // Check the color at the intersection
            Vec3b centerColor = hsv.at<Vec3b>((int)intersection.y, (int)intersection.x);
            int v = centerColor[2];
            int s = centerColor[1];

            if (v < 50) { // Black stone: low V; saturation is unreliable for dark pixels
                boardState[i][j] = 'B';
            } else if (v > 200 && s < 50) { // White stone: bright and unsaturated
                boardState[i][j] = 'W';
            }
        }
    }

    return boardState;
}

string boardStateToSGF(const vector<vector<char>> &boardState) {
    string sgfContent = "(;FF[4]GM[1]SZ[19]";
    string addBlack, addWhite;

    for (int y = 0; y < 19; ++y) {
        for (int x = 0; x < 19; ++x) {
            string coord = toSGFCoord(x, 18 - y); // Invert y-coordinate here
            if (!coord.empty()) {
                if (boardState[y][x] == 'B') {
                    addBlack += "[" + coord + "]";
                } else if (boardState[y][x] == 'W') {
                    addWhite += "[" + coord + "]";
                }
            }
        }
    }

    if (!addBlack.empty())
        sgfContent += "AB" + addBlack;
    if (!addWhite.empty())
        sgfContent += "AW" + addWhite;
    sgfContent += ")\n";
    return sgfContent;
}

int main() {
    string imageFile = "go_board.jpg";
    Mat originalImage = imread(imageFile, IMREAD_COLOR);
    if (originalImage.empty()) {
        cerr << "Error: Could not open image." << endl;
        return -1;
    }

    Mat processedImage;
    vector<Point> corners = detectBoardCorners(originalImage, processedImage);

    Mat warpedImage;
    if (corners.size() == 4) {
        warpedImage = correctPerspective(originalImage, corners);
        imshow("Warped", warpedImage);
    } else {
        warpedImage = originalImage.clone();
        cerr << "Perspective correction skipped." << endl;
    }

    vector<Line> horizontalLines, verticalLines;
    detectGridLines(warpedImage, horizontalLines, verticalLines);

    vector<Line> selectedHorz, selectedVert;
    identifyGrid(horizontalLines, verticalLines, selectedHorz, selectedVert, warpedImage);

    vector<vector<char>> board = detectStonesAtIntersections(warpedImage, selectedHorz, selectedVert);
    string sgf = boardStateToSGF(board);

    ofstream sgfFile("current_state.sgf");
    if (sgfFile.is_open()) {
        sgfFile << sgf;
        cout << "SGF saved." << endl;
    } else {
        cerr << "Failed to save SGF." << endl;
    }

    // Visualization
    Mat display = warpedImage.clone();
    for (const auto &line : selectedHorz)
        cv::line(display, line.p1, line.p2, Scalar(0, 255, 0), 1);
    for (const auto &line : selectedVert)
        cv::line(display, line.p1, line.p2, Scalar(0, 0, 255), 1);
    imshow("Grid", display);
    waitKey(0);

    return 0;
}
You can compile and run this complete version to see whether it resolves the corner-detection problem. If the problem persists, please provide more image details or error logs to help analyze it further.
vector<Point> detectBoardCorners(const Mat& image, Mat& processedImage) {
    Mat gray;
    cvtColor(image, gray, COLOR_BGR2GRAY);

    // Enhanced preprocessing
    Mat blurred, edges;
    GaussianBlur(gray, blurred, Size(5, 5), 2);
    // Modify Canny and adaptive threshold parameters
    Canny(blurred, edges, 30, 100); // Lower thresholds for more edges
    Mat thresh;
    adaptiveThreshold(blurred, thresh, 255, ADAPTIVE_THRESH_GAUSSIAN_C,
                      THRESH_BINARY, 15, 5); // Larger block size

    vector<Vec4i> lines;
    // Improved HoughLinesP parameters
    HoughLinesP(thresh, lines, 1, CV_PI/180, 40, 30, 30);

    // After HoughLinesP call:
    Mat lineVisual = Mat::zeros(thresh.size(), CV_8UC3);
    for (const auto& line : lines) {
        cv::line(lineVisual, Point(line[0], line[1]), Point(line[2], line[3]),
                 Scalar(0,255,0), 2);
    }
    imshow("Detected Lines", lineVisual);
    waitKey(0);

    vector<Vec4i> horizontalLines, verticalLines;
    const double angle_tolerance = CV_PI/6; // ~30 degrees tolerance
    //const double angle_tolerance = CV_PI/8; // Increased tolerance
    for (const auto& line : lines) {
        float dx = line[2] - line[0];
        float dy = line[3] - line[1];
        float angle = atan2(dy, dx);
        // Classify with wider tolerance
        if (abs(angle) < angle_tolerance) {
            horizontalLines.push_back(line);
        } else if (abs(abs(angle) - CV_PI/2) < angle_tolerance) {
            verticalLines.push_back(line);
        }
    }

    // Require minimum lines for reliable detection
    if (horizontalLines.size() < 5 || verticalLines.size() < 5) {
        cerr << "Insufficient lines detected" << endl;
        return {};
    }

    // Improved line sorting with outlier removal
    auto sortAndFilter = [](vector<Vec4i>& lines, bool horizontal) {
        sort(lines.begin(), lines.end(),
             [horizontal](const Vec4i& a, const Vec4i& b) {
                 return horizontal ? ((a[1]+a[3])/2 < (b[1]+b[3])/2)
                                   : ((a[0]+a[2])/2 < (b[0]+b[2])/2);
             });
        // Change outlier removal to keep more lines
        if (lines.size() > 4) {
            size_t q1 = lines.size() / 6; // Keep more lines
            size_t q3 = 5 * lines.size() / 6;
            lines = vector<Vec4i>(lines.begin() + q1, lines.begin() + q3);
        }
    };
    sortAndFilter(horizontalLines, true);
    sortAndFilter(verticalLines, false);

    // Select candidate lines from middle of sorted lists
    Vec4i topH = horizontalLines[horizontalLines.size()/3];
    Vec4i bottomH = horizontalLines[2*horizontalLines.size()/3];
    Vec4i leftV = verticalLines[verticalLines.size()/3];
    Vec4i rightV = verticalLines[2*verticalLines.size()/3];

    auto safeIntersect = [&image](const Vec4i& l1, const Vec4i& l2) -> Point2f {
        float x1 = l1[0], y1 = l1[1], x2 = l1[2], y2 = l1[3];
        float x3 = l2[0], y3 = l2[1], x4 = l2[2], y4 = l2[3];
        float det = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4);
        if (abs(det) < 1e-6)
            return Point2f(-1, -1); // Parallel
        float t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / det;
        float u = -((x1 - x2) * (y1 - y3) - (y1 - y2) * (x1 - x3)) / det;
        Point2f result(x1 + t * (x2 - x1), y1 + t * (y2 - y1));
        // Add additional check for intersection quality
        if (result.x < 0 || result.y < 0 ||
            result.x >= image.cols || result.y >= image.rows) {
            return Point2f(-1, -1);
        }
        return result;
    };

    Point2f intersections[4] = {
        safeIntersect(topH, leftV),
        safeIntersect(topH, rightV),
        safeIntersect(bottomH, rightV),
        safeIntersect(bottomH, leftV)
    };

    // Verify quadrilateral shape
    vector<Point> corners;
    for (auto& p : intersections) {
        if (p.x < 0 || p.y < 0) {
            corners.clear();
            break;
        }
        corners.emplace_back(p);
    }

    // Fallback to contour detection if Hough fails
    if (corners.empty()) {
        vector<vector<Point>> contours;
        // In contour fallback section:
        findContours(edges, contours, RETR_LIST, CHAIN_APPROX_SIMPLE); // Use all contours
        // Find largest quadrilateral
        double maxArea = 0;
        for (auto &contour : contours) {
            vector<Point> approx;
            approxPolyDP(contour, approx,
                         arcLength(contour, true) * 0.03, true); // More tolerance
            if (approx.size() == 4) {
                double area = contourArea(approx);
                if (area > maxArea) {
                    maxArea = area;
                    corners = approx;
                }
            }
        }
    }

    processedImage = image.clone();
    for (const auto& p : corners) {
        circle(processedImage, p, 10, Scalar(255, 0, 0), -1);
    }

    if (corners.size() != 4) {
        cerr << "Board detection failed. Try: \n"
             << "1. Better lighting\n"
             << "2. Clearer board edges\n"
             << "3. Adjust Hough parameters in code\n";
    }
    return corners;
}
This function still fails with "Insufficient lines detected", and the debug image is almost entirely green with only a few irregular black scratches.
Judging from the error message Insufficient lines detected and the debug image you describe, the problem most likely lies in the image preprocessing, the Hough transform parameters, or the edge detection step. Below is an analysis of the possible causes with corresponding fixes:

Possible causes

1. Too much or too little preprocessing

  • Excessive blurring: GaussianBlur may have smoothed away important edge information, making the subsequent edge and line detection difficult.
  • Poor threshold settings: the Canny edge detection and adaptiveThreshold parameters may be unreasonable, yielding either too many or too few edges.

2. Unsuitable Hough transform parameters

  • Thresholds too high: the HoughLinesP parameters (accumulator threshold, minimum line length, maximum line gap) may be set too high, so only very few lines are detected.

3. Image quality problems

  • Uneven lighting: uneven illumination degrades both edge detection and line detection.
  • Background interference: noise, texture, and other background clutter can interfere with line detection.

Fixes

1. Adjust the preprocessing steps

  • Reduce blurring: try a smaller GaussianBlur standard deviation, or a different smoothing method.
  • Tune the thresholds: adjust the Canny and adaptiveThreshold parameters to obtain more suitable edge information.
cpp
// Reduce the amount of blurring
GaussianBlur(gray, blurred, Size(3, 3), 1);

// Tune the Canny edge detection thresholds
Canny(blurred, edges, 20, 80);

// Tune the adaptive threshold parameters
adaptiveThreshold(blurred, thresh, 255, ADAPTIVE_THRESH_GAUSSIAN_C,
                  THRESH_BINARY, 11, 2);

2. Tune the Hough transform parameters

  • Lower the threshold: try lowering the HoughLinesP accumulator threshold so that more lines are detected.
cpp
// Lower the accumulator threshold
HoughLinesP(thresh, lines, 1, CV_PI/180, 30, 20, 20);

3. Handle uneven lighting and background interference

  • Histogram equalization: equalize the image histogram to boost contrast.
  • Morphological operations: use morphological opening/closing to remove noise and small artifacts.
cpp
// Histogram equalization
equalizeHist(gray, gray);

// Morphological opening to remove noise
Mat kernel = getStructuringElement(MORPH_RECT, Size(3, 3));
morphologyEx(edges, edges, MORPH_OPEN, kernel);

Complete modified code

cpp
vector<Point> detectBoardCorners(const Mat& image, Mat& processedImage) {
    Mat gray;
    cvtColor(image, gray, COLOR_BGR2GRAY);

    // Histogram equalization
    equalizeHist(gray, gray);

    // Reduce the amount of blurring
    Mat blurred, edges;
    GaussianBlur(gray, blurred, Size(3, 3), 1);

    // Tune the Canny edge detection parameters
    Canny(blurred, edges, 20, 80);

    // Morphological opening to remove noise
    Mat kernel = getStructuringElement(MORPH_RECT, Size(3, 3));
    morphologyEx(edges, edges, MORPH_OPEN, kernel);

    // Tune the adaptive threshold parameters
    Mat thresh;
    adaptiveThreshold(blurred, thresh, 255, ADAPTIVE_THRESH_GAUSSIAN_C,
                      THRESH_BINARY, 11, 2);

    vector<Vec4i> lines;
    // Lower the accumulator threshold
    HoughLinesP(thresh, lines, 1, CV_PI/180, 30, 20, 20);

    // Visualize the detected lines
    Mat lineVisual = Mat::zeros(thresh.size(), CV_8UC3);
    for (const auto& line : lines) {
        cv::line(lineVisual, Point(line[0], line[1]), 
            Point(line[2], line[3]), Scalar(0,255,0), 2);
    }
    imshow("Detected Lines", lineVisual);
    waitKey(0);

    vector<Vec4i> horizontalLines, verticalLines;
    const double angle_tolerance = CV_PI/6;  // ~30 degrees tolerance

    for (const auto& line : lines) {
        float dx = line[2] - line[0];
        float dy = line[3] - line[1];
        float angle = atan2(dy, dx);

        // Classify lines as horizontal or vertical
        if (abs(angle) < angle_tolerance) {
            horizontalLines.push_back(line);
        } else if (abs(abs(angle) - CV_PI/2) < angle_tolerance) {
            verticalLines.push_back(line);
        }
    }

    // Check that enough lines were detected
    if (horizontalLines.size() < 5 || verticalLines.size() < 5) {
        cerr << "Insufficient lines detected" << endl;
        return {};
    }

    // Improved line sorting and outlier removal
    auto sortAndFilter = [](vector<Vec4i>& lines, bool horizontal) {
        sort(lines.begin(), lines.end(), [horizontal](const Vec4i& a, const Vec4i& b) {
            return horizontal ? ((a[1]+a[3])/2 < (b[1]+b[3])/2)
                              : ((a[0]+a[2])/2 < (b[0]+b[2])/2);
        });

        // Change outlier removal to keep more lines
        if (lines.size() > 4) {
          size_t q1 = lines.size() / 6; // Keep more lines
          size_t q3 = 5 * lines.size() / 6;
          lines = vector<Vec4i>(lines.begin() + q1, lines.begin() + q3);
        }
    };

    sortAndFilter(horizontalLines, true);
    sortAndFilter(verticalLines, false);

    // Select candidate lines
    Vec4i topH = horizontalLines[horizontalLines.size()/3];
    Vec4i bottomH = horizontalLines[2*horizontalLines.size()/3];
    Vec4i leftV = verticalLines[verticalLines.size()/3];
    Vec4i rightV = verticalLines[2*verticalLines.size()/3];

    auto safeIntersect = [&image](const Vec4i& l1, const Vec4i& l2) -> Point2f {
        float x1 = l1[0], y1 = l1[1], x2 = l1[2], y2 = l1[3];
        float x3 = l2[0], y3 = l2[1], x4 = l2[2], y4 = l2[3];
        float det = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4);
        if (abs(det) < 1e-6) return Point2f(-1, -1); // Parallel lines
        float t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / det;
        float u = -((x1 - x2) * (y1 - y3) - (y1 - y2) * (x1 - x3)) / det;

        Point2f result(x1 + t * (x2 - x1), y1 + t * (y2 - y1));
        // Check intersection quality
        if (result.x < 0 || result.y < 0 || 
            result.x >= image.cols || result.y >= image.rows) {
            return Point2f(-1, -1);
        }
        return result;    
    };

    Point2f intersections[4] = {
        safeIntersect(topH, leftV),
        safeIntersect(topH, rightV),
        safeIntersect(bottomH, rightV),
        safeIntersect(bottomH, leftV)
    };

    // Verify quadrilateral shape
    vector<Point> corners;
    for (auto& p : intersections) {
        if (p.x < 0 || p.y < 0) {
            corners.clear();
            break;
        }
        corners.emplace_back(p);
    }

    // If the Hough transform fails, fall back to contour detection
    if (corners.empty()) {
        vector<vector<Point>> contours;
        findContours(edges, contours, RETR_LIST,
                     CHAIN_APPROX_SIMPLE); // Use all contours

        // Find the largest quadrilateral
        double maxArea = 0;
        for (auto &contour : contours) {
          vector<Point> approx;
          approxPolyDP(contour, approx, arcLength(contour, true) * 0.03,
                       true); // More tolerance
          if (approx.size() == 4) {
            double area = contourArea(approx);
            if (area > maxArea) {
              maxArea = area;
              corners = approx;
            }
          }
        }
    }

    processedImage = image.clone();
    for (const auto& p : corners) {
        circle(processedImage, p, 10, Scalar(255, 0, 0), -1);
    }

    if (corners.size() != 4) {
        cerr << "Board detection failed. Try: \n"
             << "1. Better lighting\n"
             << "2. Clearer board edges\n"
             << "3. Adjust Hough parameters in code\n";
    }

    return corners;
}
You can replace the original detectBoardCorners function with the code above, recompile, and run the program to see whether this solves the problem. If it still fails, please share more information about the image or the error log for further analysis.
If my input is a drawing, you can actually just assume that its edges are the board's edges. I have already made the setup that simple. So when detection fails, treat the outermost black line as the board edge. Can you change the code that way?
Yes, the code can be modified along those lines: when Hough line detection fails, the image border is treated as the board edge. Here is the modified detectBoardCorners function:
cpp
vector<Point> detectBoardCorners(const Mat& image, Mat& processedImage) {
    Mat gray;
    cvtColor(image, gray, COLOR_BGR2GRAY);
    
    // Enhanced preprocessing
    Mat blurred, edges;
    GaussianBlur(gray, blurred, Size(5, 5), 2);

    // Modify Canny and adaptive threshold parameters
    Canny(blurred, edges, 30, 100); // Lower thresholds for more edges
    Mat thresh;
    adaptiveThreshold(blurred, thresh, 255, ADAPTIVE_THRESH_GAUSSIAN_C,
                      THRESH_BINARY, 15, 5); // Larger block size 
    vector<Vec4i> lines;
    // Improved HoughLinesP parameters
    HoughLinesP(thresh, lines, 1, CV_PI/180, 40, 30, 30);
    // After HoughLinesP call:
    Mat lineVisual = Mat::zeros(thresh.size(), CV_8UC3);
    for (const auto& line : lines) {
        cv::line(lineVisual, Point(line[0], line[1]), 
            Point(line[2], line[3]), Scalar(0,255,0), 2);
    }
    imshow("Detected Lines", lineVisual);
    waitKey(0);
    vector<Vec4i> horizontalLines, verticalLines;
    const double angle_tolerance = CV_PI/6;  // ~30 degrees tolerance
    //const double angle_tolerance = CV_PI/8;  // Increased tolerance
    
    for (const auto& line : lines) {
        float dx = line[2] - line[0];
        float dy = line[3] - line[1];
        float angle = atan2(dy, dx);
        
        // Classify with wider tolerance
        if (abs(angle) < angle_tolerance) {
            horizontalLines.push_back(line);
        } else if (abs(abs(angle) - CV_PI/2) < angle_tolerance) {
            verticalLines.push_back(line);
        }
    }

    // Require minimum lines for reliable detection
    if (horizontalLines.size() < 5 || verticalLines.size() < 5) {
        // If too few lines are detected, treat the image border as the board edge
        vector<Point> corners;
        corners.push_back(Point(0, 0));
        corners.push_back(Point(image.cols - 1, 0));
        corners.push_back(Point(image.cols - 1, image.rows - 1));
        corners.push_back(Point(0, image.rows - 1));

        processedImage = image.clone();
        for (const auto& p : corners) {
            circle(processedImage, p, 10, Scalar(255, 0, 0), -1);
        }
        return corners;
    }

    // Improved line sorting with outlier removal
    auto sortAndFilter = [](vector<Vec4i>& lines, bool horizontal) {
        sort(lines.begin(), lines.end(), [horizontal](const Vec4i& a, const Vec4i& b) {
            return horizontal? ((a[1]+a[3])/2 < (b[1]+b[3])/2)
                              : ((a[0]+a[2])/2 < (b[0]+b[2])/2);
        });       
    
        // Change outlier removal to keep more lines
        if (lines.size() > 4) {
          size_t q1 = lines.size() / 6; // Keep more lines
          size_t q3 = 5 * lines.size() / 6;
          lines = vector<Vec4i>(lines.begin() + q1, lines.begin() + q3);
        }
    };

    sortAndFilter(horizontalLines, true);
    sortAndFilter(verticalLines, false);

    // Select candidate lines from middle of sorted lists
    Vec4i topH = horizontalLines[horizontalLines.size()/3];
    Vec4i bottomH = horizontalLines[2*horizontalLines.size()/3];
    Vec4i leftV = verticalLines[verticalLines.size()/3];
    Vec4i rightV = verticalLines[2*verticalLines.size()/3];

    auto safeIntersect = [&image](const Vec4i& l1, const Vec4i& l2) -> Point2f {
        float x1 = l1[0], y1 = l1[1], x2 = l1[2], y2 = l1[3];
        float x3 = l2[0], y3 = l2[1], x4 = l2[2], y4 = l2[3];
        float det = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4);
        if (abs(det) < 1e-6) return Point2f(-1, -1); // Parallel
        float t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / det;
        float u = -((x1 - x2) * (y1 - y3) - (y1 - y2) * (x1 - x3)) / det;

        Point2f result(x1 + t * (x2 - x1), y1 + t * (y2 - y1));
        // Add additional check for intersection quality
        if (result.x < 0 || result.y < 0 || 
            result.x >= image.cols || result.y >= image.rows) {
            return Point2f(-1, -1);
        }
        return result;    
    };

    Point2f intersections[4] = {
        safeIntersect(topH, leftV),
        safeIntersect(topH, rightV),
        safeIntersect(bottomH, rightV),
        safeIntersect(bottomH, leftV)
    };

    // Verify quadrilateral shape
    vector<Point> corners;
    for (auto& p : intersections) {
        if (p.x < 0 || p.y < 0) {
            corners.clear();
            break;
        }
        corners.emplace_back(p);
    }

    // Fallback to contour detection if Hough fails
    if (corners.empty()) {
        vector<vector<Point>> contours;
    
        // In contour fallback section:
        findContours(edges, contours, RETR_LIST,
                     CHAIN_APPROX_SIMPLE); // Use all contours

        // Find largest quadrilateral
        double maxArea = 0;
        for (auto &contour : contours) {
          vector<Point> approx;
          approxPolyDP(contour, approx, arcLength(contour, true) * 0.03,
                       true); // More tolerance
          if (approx.size() == 4) {
            double area = contourArea(approx);
            if (area > maxArea) {
              maxArea = area;
              corners = approx;
            }
          }
        }
    }

    processedImage = image.clone();
    for (const auto& p : corners) {
        circle(processedImage, p, 10, Scalar(255, 0, 0), -1);
    }

    if (corners.size() != 4) {
        cerr << "Board detection failed. Try: \n"
             << "1. Better lighting\n"
             << "2. Clearer board edges\n"
             << "3. Adjust Hough parameters in code\n";
    }

    return corners;
}
In this version, when too few horizontal or vertical lines are detected (horizontalLines.size() < 5 || verticalLines.size() < 5), the four corners of the image are returned directly as the board corners, and the remaining line processing and intersection computation are skipped. Under the assumption that the image border is the board edge, the program can obtain the board corners much more reliably.
Your code still returns an empty result. Can you also make use of the fact that a Go board has 19 lines, i.e., it is 19×19? The board's corners are at the four corners of the input image; given the 19×19 grid, you can compute the intersection points directly.
cpp
#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>

using namespace cv;
using namespace std;

vector<Point> detectBoardCorners(const Mat& image, Mat& processedImage) {
    // Assume the image border is the board edge; take the four image corners directly
    vector<Point> corners;
    corners.push_back(Point(0, 0));
    corners.push_back(Point(image.cols - 1, 0));
    corners.push_back(Point(image.cols - 1, image.rows - 1));
    corners.push_back(Point(0, image.rows - 1));

    // Use the 19x19 board size to compute the intersection points
    const int boardSize = 19;
    float cellWidth = static_cast<float>(image.cols) / (boardSize - 1);
    float cellHeight = static_cast<float>(image.rows) / (boardSize - 1);

    vector<vector<Point>> intersectionPoints(boardSize, vector<Point>(boardSize));
    for (int i = 0; i < boardSize; ++i) {
        for (int j = 0; j < boardSize; ++j) {
            intersectionPoints[i][j] = Point(static_cast<int>(j * cellWidth), static_cast<int>(i * cellHeight));
        }
    }

    // Display the intersection points (for debugging)
    processedImage = image.clone();
    for (const auto& row : intersectionPoints) {
        for (const Point& p : row) {
            circle(processedImage, p, 2, Scalar(255, 0, 0), -1);
        }
    }
    imshow("Intersection Points", processedImage);
    waitKey(0);

    return corners;
}


int main() {
    Mat image = imread("your_image.jpg");
    if (image.empty()) {
        cerr << "Could not open or find the image" << endl;
        return -1;
    }

    Mat processedImage;
    vector<Point> corners = detectBoardCorners(image, processedImage);

    if (corners.empty()) {
        cerr << "Failed to detect board corners" << endl;
    } else {
        cout << "Board corners detected successfully" << endl;
    }

    return 0;
}

Code notes:

  1. Board corners: the four corners of the image are used directly as the board corners and stored in the corners vector.
  2. Intersection computation: using the 19x19 board size, the coordinates of every intersection are computed. Dividing the image width and height by 18 (19 lines leave 18 gaps) gives the cell width and height, from which each intersection is derived and stored in the intersectionPoints 2D vector.
  3. Displaying intersections: the intersections are drawn on the image for debugging and verification; comment out imshow and waitKey to suppress the display.
  4. Main function: reads the image, calls detectBoardCorners, and prints whether corner detection succeeded.
Replace your_image.jpg with the actual image file name. If you run into problems, check that the image path is correct and that OpenCV is properly installed and configured.
I find it hard to believe that there is no open-source tool that can read a Go board and generate a game record. In C++ or C this looks like a very simple tool; why can't I find an open-source project for it?
Projects written in C or C++ that can read a Go board and generate game records do exist; you may simply not have found them because of search channels or keywords. For example:
  • Go-Reader: developed in C++. Its core function is analyzing SGF game-record files, using the Leela Zero engine to generate win rates and next-move data, but its SGF-handling logic could, with some rework, be applied to reading a board and generating a record. Its code structure and algorithms, such as how game data is parsed and processed, offer ideas for board recognition and record generation; parts of the code could be reused, or its architecture borrowed, when building a new tool. Project: https://github.com/kennyjin/Go-Reader
  • M8围棋谱: written in C/C++. It focuses on parsing and displaying SGF files. To read a board and generate records, you could draw on its SGF read/write and parsing logic, for example how move information is extracted and stored when parsing a record; by adding image-recognition code, it could in theory be adapted into the tool you need. Introduction: https://www.cnblogs.com/fortest/archive/2010/07/22/2056903.html
What I need is not reading SGF files but reading a snapshot of the board, that is, a photo. A game recorder photographs the board and generates the game record from the image, so it needs stone recognition on the board.
There are indeed open-source projects that can read a snapshot of a Go board and generate a game record:
  • GoScanner: built with Python, OpenCV, and Keras. It first detects the board edges in the input image and warps the image to a square, top-down view. It then splits the image into cells and feeds each cell to a trained convolutional neural network (CNN) to decide whether it holds a white stone, a black stone, or nothing, and finally converts the classifications into an SGF file. The project is currently in a non-working state (with plans to fix it), but its image-processing and stone-recognition logic, such as the board edge detection and the CNN design, is well worth referencing. Project: https://github.com/JoeHowarth/GoScanner
  • image2sgf: implemented with Python and OpenCV. It targets printed board diagrams (screenshots from smart Go books, purchased PDFs, or scans of physical books) and converts them to SGF. It copes well with boards carrying numbers, marks, and annotations by ignoring that clutter, but it cannot handle perspective distortion; processing real board photos would require extending the code, e.g. adding perspective correction. Project: https://github.com/hanysz/img2sgf
  • image2sgf (ScriabinOp8No12): likewise a Python project that converts board images to SGF, but it requires a fixed image size of 915x921 pixels (on a 1920x1080 display), because the board intersections are located based on a specific screen setup. Its coordinate computation and SGF generation logic could be combined with image preprocessing and board detection to handle broader inputs. Project: https://github.com/ScriabinOp8No12/image2sgf
Is there a C++ implementation?
Below is a more complete C++ example that recognizes a Go board image and generates a game record. It uses OpenCV for image processing: board corner detection, perspective correction, grid-line detection, stone recognition, and finally SGF output.
cpp
#include <algorithm>
#include <cmath>
#include <fstream>
#include <iostream>
#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;
using namespace std;

// Convert board indices to an SGF coordinate string
string toSGFCoord(int x_index, int y_index) {
    if (x_index < 0 || x_index >= 19 || y_index < 0 || y_index >= 19) {
        return "";
    }
    // SGF coordinates are two lowercase letters 'a'..'s'; unlike GTP
    // coordinates, SGF does NOT skip the letter 'i'
    char col = 'a' + x_index;
    char row = 'a' + y_index;
    return string(1, col) + string(1, row);
}

// Detect the board corners
vector<Point> detectBoardCorners(const Mat& image, Mat& processedImage) {
    Mat gray;
    cvtColor(image, gray, COLOR_BGR2GRAY);
    GaussianBlur(gray, gray, Size(5, 5), 0);

    // HoughLinesP expects a binary edge map, not a raw grayscale image
    Mat edgesForLines;
    Canny(gray, edgesForLines, 50, 150);

    vector<Vec4i> lines;
    HoughLinesP(edgesForLines, lines, 1, CV_PI / 180, 50, 30, 10);

    vector<Vec4i> horizontalLines, verticalLines;
    for (const auto& line : lines) {
        float angle = atan2(line[3] - line[1], line[2] - line[0]);
        if (abs(angle) < CV_PI / 10) {
            horizontalLines.push_back(line);
        }
        else if (abs(abs(angle) - CV_PI / 2) < CV_PI / 10) {
            verticalLines.push_back(line);
        }
    }

    sort(horizontalLines.begin(), horizontalLines.end(),
         [](const Vec4i& a, const Vec4i& b) {
             return (a[1] + a[3]) / 2 < (b[1] + b[3]) / 2;
         });
    sort(verticalLines.begin(), verticalLines.end(),
         [](const Vec4i& a, const Vec4i& b) {
             return (a[0] + a[2]) / 2 < (b[0] + b[2]) / 2;
         });

    Point2f intersections[4];
    bool foundAll = false;

    if (horizontalLines.size() >= 2 && verticalLines.size() >= 2) {
        Vec4i topH = horizontalLines.front();
        Vec4i bottomH = horizontalLines.back();
        Vec4i leftV = verticalLines.front();
        Vec4i rightV = verticalLines.back();

        auto intersect = [](const Vec4i& l1, const Vec4i& l2) -> Point2f {
            float x1 = l1[0], y1 = l1[1], x2 = l1[2], y2 = l1[3];
            float x3 = l2[0], y3 = l2[1], x4 = l2[2], y4 = l2[3];
            float det = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4);
            if (abs(det) < 1e-6)
                return Point2f(-1, -1); // parallel lines
            float t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / det;
            float u = -((x1 - x2) * (y1 - y3) - (y1 - y2) * (x1 - x3)) / det;
            if (t >= 0 && t <= 1 && u >= 0 && u <= 1) { // intersection inside both segments
                return Point2f(x1 + t * (x2 - x1), y1 + t * (y2 - y1));
            }
            else {
                return Point2f(x1 + t * (x2 - x1), y1 + t * (y2 - y1)); // intersection on the extensions
            }
        };

        intersections[0] = intersect(topH, leftV);     // top-left
        intersections[1] = intersect(topH, rightV);    // top-right
        intersections[2] = intersect(bottomH, rightV); // bottom-right
        intersections[3] = intersect(bottomH, leftV);  // bottom-left

        bool allValid = true;
        for (const auto& p : intersections) {
            if (p.x < 0 || p.y < 0 || p.x >= image.cols || p.y >= image.rows) {
                allValid = false;
                break;
            }
        }

        if (allValid) {
            foundAll = true;
        }
    }

    vector<Point> selected_corners;
    if (foundAll) {
        selected_corners = { Point(intersections[0]), Point(intersections[1]),
                             Point(intersections[2]), Point(intersections[3]) };
    }
    else {
        // If the Hough transform fails, try contour detection
        Mat edges;
        Canny(gray, edges, 50, 150);
        vector<vector<Point>> contours;
        findContours(edges, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);

        for (const auto& contour : contours) {
            if (contour.size() == 4) {
                double area = contourArea(contour);
                if (area > 1000) { // tune this area threshold as needed
                    selected_corners = contour;
                    break;
                }
            }
        }

        if (selected_corners.empty()) {
            cerr << "Error: Could not find reliable board corners." << endl;
        }
    }

    processedImage = image.clone();
    for (const auto& p : selected_corners) {
        circle(processedImage, p, 10, Scalar(255, 0, 0), -1);
    }

    return selected_corners;
}

// Correct the board image with a perspective transform
Mat correctPerspective(const Mat& image, const vector<Point>& corners) {
    if (corners.size() != 4) {
        cerr << "Error: Need 4 corners for perspective correction." << endl;
        return image.clone();
    }

    float side = 500.0;
    Point2f target_corners[4] = { Point2f(0, 0), Point2f(side - 1, 0),
                                 Point2f(side - 1, side - 1),
                                 Point2f(0, side - 1) };
    Point2f src_corners_array[4];
    for (int i = 0; i < 4; ++i) {
        src_corners_array[i] = Point2f(corners[i]);
    }

    Mat src_points(1, 4, CV_32FC2, src_corners_array);
    Mat dst_points(1, 4, CV_32FC2, target_corners);
    Mat homography = findHomography(src_points, dst_points, 0);
    Mat warped_image;
    warpPerspective(image, warped_image, homography, Size(side, side));

    return warped_image;
}

// Detect the grid lines
void detectGridLines(const Mat& image, vector<Vec4i>& horizontalLines, vector<Vec4i>& verticalLines) {
    Mat gray, blurred, edges;
    cvtColor(image, gray, COLOR_BGR2GRAY);
    GaussianBlur(gray, blurred, Size(5, 5), 0);
    Canny(blurred, edges, 50, 150);

    HoughLinesP(edges, horizontalLines, 1, CV_PI / 180, 50, 30, 10);
    verticalLines.clear();
    for (const auto& line : horizontalLines) {
        float angle = atan2(line[3] - line[1], line[2] - line[0]);
        if (abs(abs(angle) - CV_PI / 2) < CV_PI / 10) {
            verticalLines.push_back(line);
        }
    }
    horizontalLines.erase(remove_if(horizontalLines.begin(), horizontalLines.end(),
                                    [](const Vec4i& line) {
                                        float angle = atan2(line[3] - line[1], line[2] - line[0]);
                                        return abs(abs(angle) - CV_PI / 2) < CV_PI / 10;
                                    }),
                          horizontalLines.end());
}

// Identify the 19x19 grid
void identifyGrid(vector<Vec4i>& horizontalLines, vector<Vec4i>& verticalLines,
                  vector<Vec4i>& selectedHorizontal, vector<Vec4i>& selectedVertical) {
    // Guard: front()/back() below are undefined on empty vectors
    if (horizontalLines.empty() || verticalLines.empty()) return;
    sort(horizontalLines.begin(), horizontalLines.end(),
         [](const Vec4i& a, const Vec4i& b) {
             return (a[1] + a[3]) / 2 < (b[1] + b[3]) / 2;
         });
    sort(verticalLines.begin(), verticalLines.end(),
         [](const Vec4i& a, const Vec4i& b) {
             return (a[0] + a[2]) / 2 < (b[0] + b[2]) / 2;
         });

    selectedHorizontal.clear();
    selectedVertical.clear();

    float avgHorizontalSpacing = (horizontalLines.back()[1] + horizontalLines.back()[3]) / 2 -
                                 (horizontalLines.front()[1] + horizontalLines.front()[3]) / 2;
    avgHorizontalSpacing /= 18;
    float avgVerticalSpacing = (verticalLines.back()[0] + verticalLines.back()[2]) / 2 -
                               (verticalLines.front()[0] + verticalLines.front()[2]) / 2;
    avgVerticalSpacing /= 18;

    selectedHorizontal.push_back(horizontalLines.front());
    selectedVertical.push_back(verticalLines.front());
    for (size_t i = 1; i < horizontalLines.size(); ++i) {
        float currentY = (horizontalLines[i][1] + horizontalLines[i][3]) / 2;
        float lastY = (selectedHorizontal.back()[1] + selectedHorizontal.back()[3]) / 2;
        if (abs(currentY - lastY) >= avgHorizontalSpacing * 0.7 && selectedHorizontal.size() < 19) {
            selectedHorizontal.push_back(horizontalLines[i]);
        }
    }
    for (size_t i = 1; i < verticalLines.size(); ++i) {
        float currentX = (verticalLines[i][0] + verticalLines[i][2]) / 2;
        float lastX = (selectedVertical.back()[0] + selectedVertical.back()[2]) / 2;
        if (abs(currentX - lastX) >= avgVerticalSpacing * 0.7 && selectedVertical.size() < 19) {
            selectedVertical.push_back(verticalLines[i]);
        }
    }
}

// Detect stones at the grid intersections
vector<vector<char>> detectStonesAtIntersections(const Mat& image,
                                                const vector<Vec4i>& horizontalLines,
                                                const vector<Vec4i>& verticalLines) {
    int boardSize = 19;
    vector<vector<char>> boardState(boardSize, vector<char>(boardSize, '.'));

    if (horizontalLines.size() < boardSize || verticalLines.size() < boardSize) {
        cerr << "Not enough grid lines detected." << endl;
        return boardState;
    }

    Mat hsv;
    cvtColor(image, hsv, COLOR_BGR2HSV);
    for (int i = 0; i < boardSize; ++i) {
        for (int j = 0; j < boardSize; ++j) {
            Point2f intersection(
                (verticalLines[j][0] + verticalLines[j][2]) / 2.0f,
                (horizontalLines[i][1] + horizontalLines[i][3]) / 2.0f);

            if (intersection.x < 0 || intersection.x >= image.cols ||
                intersection.y < 0 || intersection.y >= image.rows)
                continue;

            Vec3b centerColor = hsv.at<Vec3b>(intersection.y, intersection.x);
            int v = centerColor[2];
            int s = centerColor[1];
            if (v < 80 && s > 30) { // black stone
                boardState[i][j] = 'B';
            }
            else if (v > 180 && s < 80) { // white stone
                boardState[i][j] = 'W';
            }
        }
    }

    return boardState;
}

// Convert the board state to SGF format
string boardStateToSGF(const vector<vector<char>>& boardState) {
    string sgfContent = "(;FF[4]GM[1]SZ[19]";
    string addBlack, addWhite;

    for (int y = 0; y < 19; ++y) {
        for (int x = 0; x < 19; ++x) {
            string coord = toSGFCoord(x, 18 - y);
            if (!coord.empty()) {
                if (boardState[y][x] == 'B') {
                    addBlack += "[" + coord + "]";
                }
                else if (boardState[y][x] == 'W') {
                    addWhite += "[" + coord + "]";
                }
            }
        }
    }

    if (!addBlack.empty())
        sgfContent += "AB" + addBlack;
    if (!addWhite.empty())
        sgfContent += "AW" + addWhite;
    sgfContent += ")";

    return sgfContent;
}

int main() {
    string imageFile = "go_board.jpg";
    Mat originalImage = imread(imageFile, IMREAD_COLOR);
    if (originalImage.empty()) {
        cerr << "Error: Could not open image." << endl;
        return -1;
    }

    Mat processedImage;
    vector<Point> corners = detectBoardCorners(originalImage, processedImage);

    Mat warpedImage;
    if (corners.size() == 4) {
        warpedImage = correctPerspective(originalImage, corners);
    }
    else {
        warpedImage = originalImage.clone();
        cerr << "Perspective correction skipped." << endl;
    }

    vector<Vec4i> horizontalLines, verticalLines;
    detectGridLines(warpedImage, horizontalLines, verticalLines);

    vector<Vec4i> selectedHorz, selectedVert;
    identifyGrid(horizontalLines, verticalLines, selectedHorz, selectedVert);

    vector<vector<char>> board = detectStonesAtIntersections(warpedImage, selectedHorz, selectedVert);
    string sgf = boardStateToSGF(board);

    ofstream sgfFile("current_state.sgf");
    if (sgfFile.is_open()) {
        sgfFile << sgf;
        cout << "SGF saved." << endl;
    }
    else {
        cerr << "Failed to save SGF." << endl;
    }

    return 0;
}

Code notes:

  1. Corner detection: detectBoardCorners first uses the Hough transform to find the board corners; if that fails, it falls back to contour detection and looks for a quadrilateral contour.
  2. Perspective correction: correctPerspective uses the detected corners to warp the board image to a frontal view.
  3. Grid-line detection and selection: detectGridLines finds horizontal and vertical lines in the image; identifyGrid then selects 19 horizontal and 19 vertical grid lines from them.
  4. Stone detection: detectStonesAtIntersections uses the HSV color space to decide whether a grid intersection holds a stone and whether it is black or white.
  5. SGF generation: boardStateToSGF converts the detected board state into an SGF string, which is saved to a file.
Make sure OpenCV is installed and configured, then compile and run the code, replacing go_board.jpg with the path to your actual board image.
Does GNU Go have the ability to read an image of a Go board?
Based on the provided materials, there is no indication that GNU Go has the ability to read an image of a Go board.
  • Function focus: GNU Go's main focus is the internal processing of Go games, such as move generation, position analysis, and implementing the game logic. It has functions for understanding the board position, generating candidate moves, and evaluating moves based on territorial value and strategic effects. For example, it uses functions like examine_position() to gather information about the board, identify groups of stones, and analyze life-and-death situations. But these functions operate on the internal data representation of the board state, not on image data.
  • No mention of image reading: the detailed GNU Go documentation, which covers its board library, tactical reading, and overall engine operation, makes no mention of image-processing capabilities or of reading board images. The board library, for instance, is centered on the Go board data structure and the algorithms for string and liberty tracking, not on processing visual data.
  • Engine architecture: GNU Go's architecture is designed for in-memory game-state management and AI-based decision-making during a game. It has no components or modules for tasks like image recognition, which would be necessary for reading board images. This is in contrast to applications designed for optical character recognition or image-based object detection, which have specific algorithms for handling image input.
Do you know of any GUI program for GNU Go that includes image-reading functionality?
As of July 2024, there isn't a direct, built-in GUI program for GNU Go with image-reading functionality. GNU Go is mainly a command-line program focused on the game-playing and analysis aspects of Go, with no native support for reading and processing images. However, there are potential ways to achieve a similar outcome:
  • Using external image-processing tools in conjunction:
    • Tools like ImageMagick or OpenCV can be used to process board images. With OpenCV, for instance, you can detect the edges of the board in an image, identify the stone positions, and translate this information into a format GNU Go can understand. You can then use GNU Go's command-line interface to analyze the board state. This requires writing custom code, but it bridges the gap between image processing and GNU Go's capabilities.
    • ImageMagick can handle basic image manipulation such as resizing, cropping, and thresholding. You can use it to pre-process board images to make the subsequent analysis easier, for example by converting a color image to grayscale and applying thresholding to enhance the contrast between the stones and the board.
  • Developing a custom GUI with image-reading capabilities:
    • GTK+ (GIMP Toolkit) is a multi-platform library written in C with a wide range of widgets. You could use it to build a custom GUI for GNU Go that includes image reading: handle image input, process it (using techniques like those in OpenCV), and then interact with GNU Go. For example, a window with a button to load a board image, whose events trigger the image processing and the GNU Go analysis.
    • Other libraries like Qt (which has bindings for several languages, including C++) can also be used. Qt provides a comprehensive toolkit for building user interfaces, and image-processing code can be integrated into a Qt-based application that then communicates with GNU Go, either through its command-line interface or through a more integrated approach if an appropriate API is available.
your code cannot even handle basic board corner detection, it is very disappointing.
I'm sorry the code didn't work for you. Board corner detection is a challenging task that can be affected by many factors, such as image quality, lighting conditions, and the color and texture of the board.
Here are some likely reasons the corner detection fails, along with corresponding improvements:
  1. Image quality issues
    • Blur or noise: If the image is blurry or contains a lot of noise, corner detection accuracy suffers. Try preprocessing the image before detection, e.g. Gaussian blur to suppress noise, or a sharpening filter to strengthen edges.
    • Uneven lighting: Non-uniform illumination can leave parts of the board over- or under-exposed, hiding the corners. Histogram equalization or other lighting-correction methods can improve this.
  2. Limitations of the corner-detection algorithm
    • Hough transform: The Hough transform used in the code may not be robust for every board image. Try tuning its parameters (threshold, minimum line length, maximum line gap) to improve accuracy, or consider alternative corner detectors such as Harris or Shi-Tomasi.
    • Contour detection: When the Hough transform fails, the code falls back to contour detection to find the board corners. This places some demands on the board's shape and edges; if the edges are unclear or cluttered, it will also fail. Adjusting the contour parameters (thresholds, approximation method) may help.
  3. Board-specific factors
    • Board color and texture: Different boards have different colors and textures, which affects detection accuracy. Test against several board types and tune the parameters accordingly.
    • Board shape and size: If the board's shape or size differs from what the code expects, detection can fail. Consider adding adaptive logic to handle different board geometries.
Below is an improved version of the code that incorporates some of these measures:
cpp
Run
#include <algorithm>
#include <cmath>
#include <fstream>
#include <iostream>
#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;
using namespace std;

// Convert board indices to an SGF coordinate string.
// Note: SGF point coordinates run 'a'..'s' for a 19x19 board and do NOT
// skip 'i'; the skip-'I' convention belongs to GTP/printed board labels.
string toSGFCoord(int x_index, int y_index) {
    if (x_index < 0 || x_index >= 19 || y_index < 0 || y_index >= 19) {
        return "";
    }
    char col = 'a' + x_index;
    char row = 'a' + y_index;
    return string(1, col) + string(1, row);
}

// Detect the four outer corners of the board
vector<Point> detectBoardCorners(const Mat& image, Mat& processedImage) {
    Mat gray;
    cvtColor(image, gray, COLOR_BGR2GRAY);
    GaussianBlur(gray, gray, Size(5, 5), 0);

    // Histogram equalization to improve contrast
    equalizeHist(gray, gray);

    // HoughLinesP expects a binary edge map, so run Canny first
    // (feeding it the raw grayscale image finds few or no lines)
    Mat houghEdges;
    Canny(gray, houghEdges, 50, 150);

    vector<Vec4i> lines;
    HoughLinesP(houghEdges, lines, 1, CV_PI / 180, 50, 30, 10);

    vector<Vec4i> horizontalLines, verticalLines;
    for (const auto& line : lines) {
        float angle = atan2(line[3] - line[1], line[2] - line[0]);
        if (abs(angle) < CV_PI / 10) {
            horizontalLines.push_back(line);
        }
        else if (abs(abs(angle) - CV_PI / 2) < CV_PI / 10) {
            verticalLines.push_back(line);
        }
    }

    sort(horizontalLines.begin(), horizontalLines.end(),
         [](const Vec4i& a, const Vec4i& b) {
             return (a[1] + a[3]) / 2 < (b[1] + b[3]) / 2;
         });
    sort(verticalLines.begin(), verticalLines.end(),
         [](const Vec4i& a, const Vec4i& b) {
             return (a[0] + a[2]) / 2 < (b[0] + b[2]) / 2;
         });

    Point2f intersections[4];
    bool foundAll = false;

    if (horizontalLines.size() >= 2 && verticalLines.size() >= 2) {
        Vec4i topH = horizontalLines.front();
        Vec4i bottomH = horizontalLines.back();
        Vec4i leftV = verticalLines.front();
        Vec4i rightV = verticalLines.back();

        // Intersection of the infinite lines through two segments
        auto intersect = [](const Vec4i& l1, const Vec4i& l2) -> Point2f {
            float x1 = l1[0], y1 = l1[1], x2 = l1[2], y2 = l1[3];
            float x3 = l2[0], y3 = l2[1], x4 = l2[2], y4 = l2[3];
            float det = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4);
            if (abs(det) < 1e-6)
                return Point2f(-1, -1); // parallel lines
            float t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / det;
            // Works whether the segments overlap or only their extensions
            // meet, so no in-segment check is needed
            return Point2f(x1 + t * (x2 - x1), y1 + t * (y2 - y1));
        };

        intersections[0] = intersect(topH, leftV);     // top-left
        intersections[1] = intersect(topH, rightV);    // top-right
        intersections[2] = intersect(bottomH, rightV); // bottom-right
        intersections[3] = intersect(bottomH, leftV);  // bottom-left

        bool allValid = true;
        for (const auto& p : intersections) {
            if (p.x < 0 || p.y < 0 || p.x >= image.cols || p.y >= image.rows) {
                allValid = false;
                break;
            }
        }

        if (allValid) {
            foundAll = true;
        }
    }

    vector<Point> selected_corners;
    if (foundAll) {
        selected_corners = { Point(intersections[0]), Point(intersections[1]),
                             Point(intersections[2]), Point(intersections[3]) };
    }
    else {
        // Fall back to contour detection if the Hough transform fails
        Mat edges;
        Canny(gray, edges, 50, 150);

        // Morphological closing to strengthen broken edges
        Mat kernel = getStructuringElement(MORPH_RECT, Size(3, 3));
        morphologyEx(edges, edges, MORPH_CLOSE, kernel);

        vector<vector<Point>> contours;
        findContours(edges, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);

        for (const auto& contour : contours) {
            // Approximate the contour first: a raw contour almost never has
            // exactly 4 points, so test the polygonal approximation instead
            vector<Point> approx;
            approxPolyDP(contour, approx, 0.02 * arcLength(contour, true), true);
            if (approx.size() == 4) {
                double area = contourArea(approx);
                if (area > 1000) { // tune this area threshold for your setup
                    selected_corners = approx;
                    break;
                }
            }
        }

        if (selected_corners.empty()) {
            cerr << "Error: Could not find reliable board corners." << endl;
        }
    }

    processedImage = image.clone();
    for (const auto& p : selected_corners) {
        circle(processedImage, p, 10, Scalar(255, 0, 0), -1);
    }

    return selected_corners;
}

// Correct the board image with a perspective transform
Mat correctPerspective(const Mat& image, const vector<Point>& corners) {
    if (corners.size() != 4) {
        cerr << "Error: Need 4 corners for perspective correction." << endl;
        return image.clone();
    }

    const int side = 500; // output image is side x side pixels
    Point2f target_corners[4] = { Point2f(0, 0), Point2f(side - 1, 0),
                                 Point2f(side - 1, side - 1),
                                 Point2f(0, side - 1) };
    Point2f src_corners_array[4];
    for (int i = 0; i < 4; ++i) {
        src_corners_array[i] = Point2f(corners[i]);
    }

    Mat src_points(1, 4, CV_32FC2, src_corners_array);
    Mat dst_points(1, 4, CV_32FC2, target_corners);
    Mat homography = findHomography(src_points, dst_points, 0);
    Mat warped_image;
    warpPerspective(image, warped_image, homography, Size(side, side));

    return warped_image;
}

// Detect grid lines and split them into horizontal and vertical sets
void detectGridLines(const Mat& image, vector<Vec4i>& horizontalLines, vector<Vec4i>& verticalLines) {
    Mat gray, blurred, edges;
    cvtColor(image, gray, COLOR_BGR2GRAY);
    GaussianBlur(gray, blurred, Size(5, 5), 0);
    Canny(blurred, edges, 50, 150);

    vector<Vec4i> lines;
    HoughLinesP(edges, lines, 1, CV_PI / 180, 50, 30, 10);

    // Classify by angle; lines that are neither near-horizontal nor
    // near-vertical (diagonals, noise) are discarded rather than kept
    horizontalLines.clear();
    verticalLines.clear();
    for (const auto& line : lines) {
        float angle = atan2(line[3] - line[1], line[2] - line[0]);
        if (abs(abs(angle) - CV_PI / 2) < CV_PI / 10) {
            verticalLines.push_back(line);
        }
        else if (abs(angle) < CV_PI / 10) {
            horizontalLines.push_back(line);
        }
    }
}

// Select 19 evenly spaced horizontal and vertical grid lines
void identifyGrid(vector<Vec4i>& horizontalLines, vector<Vec4i>& verticalLines,
                  vector<Vec4i>& selectedHorizontal, vector<Vec4i>& selectedVertical) {
    selectedHorizontal.clear();
    selectedVertical.clear();
    // Guard first: front()/back() on an empty vector is undefined behavior
    if (horizontalLines.empty() || verticalLines.empty()) {
        cerr << "identifyGrid: no candidate grid lines." << endl;
        return;
    }

    sort(horizontalLines.begin(), horizontalLines.end(),
         [](const Vec4i& a, const Vec4i& b) {
             return (a[1] + a[3]) / 2 < (b[1] + b[3]) / 2;
         });
    sort(verticalLines.begin(), verticalLines.end(),
         [](const Vec4i& a, const Vec4i& b) {
             return (a[0] + a[2]) / 2 < (b[0] + b[2]) / 2;
         });

    float avgHorizontalSpacing = (horizontalLines.back()[1] + horizontalLines.back()[3]) / 2 -
                                 (horizontalLines.front()[1] + horizontalLines.front()[3]) / 2;
    avgHorizontalSpacing /= 18;
    float avgVerticalSpacing = (verticalLines.back()[0] + verticalLines.back()[2]) / 2 -
                               (verticalLines.front()[0] + verticalLines.front()[2]) / 2;
    avgVerticalSpacing /= 18;

    selectedHorizontal.push_back(horizontalLines.front());
    selectedVertical.push_back(verticalLines.front());
    for (size_t i = 1; i < horizontalLines.size(); ++i) {
        float currentY = (horizontalLines[i][1] + horizontalLines[i][3]) / 2;
        float lastY = (selectedHorizontal.back()[1] + selectedHorizontal.back()[3]) / 2;
        if (abs(currentY - lastY) >= avgHorizontalSpacing * 0.7 && selectedHorizontal.size() < 19) {
            selectedHorizontal.push_back(horizontalLines[i]);
        }
    }
    for (size_t i = 1; i < verticalLines.size(); ++i) {
        float currentX = (verticalLines[i][0] + verticalLines[i][2]) / 2;
        float lastX = (selectedVertical.back()[0] + selectedVertical.back()[2]) / 2;
        if (abs(currentX - lastX) >= avgVerticalSpacing * 0.7 && selectedVertical.size() < 19) {
            selectedVertical.push_back(verticalLines[i]);
        }
    }
}

// Detect stones at the grid intersections
vector<vector<char>> detectStonesAtIntersections(const Mat& image,
                                                const vector<Vec4i>& horizontalLines,
                                                const vector<Vec4i>& verticalLines) {
    const int boardSize = 19;
    vector<vector<char>> boardState(boardSize, vector<char>(boardSize, '.'));

    if ((int)horizontalLines.size() < boardSize || (int)verticalLines.size() < boardSize) {
        cerr << "Not enough grid lines detected." << endl;
        return boardState;
    }

    Mat hsv;
    cvtColor(image, hsv, COLOR_BGR2HSV);
    for (int i = 0; i < boardSize; ++i) {
        for (int j = 0; j < boardSize; ++j) {
            Point2f intersection(
                (verticalLines[j][0] + verticalLines[j][2]) / 2.0f,
                (horizontalLines[i][1] + horizontalLines[i][3]) / 2.0f);

            if (intersection.x < 0 || intersection.x >= image.cols ||
                intersection.y < 0 || intersection.y >= image.rows)
                continue;

            // at<>() takes integer indices; truncate the float coordinates
            // (truncation stays consistent with the bounds check above)
            Vec3b centerColor = hsv.at<Vec3b>((int)intersection.y, (int)intersection.x);
            int v = centerColor[2];
            int s = centerColor[1];
            if (v < 80) { // black stone: low brightness
                boardState[i][j] = 'B';
            }
            else if (v > 180 && s < 80) { // white stone: bright and unsaturated
                boardState[i][j] = 'W';
            }
        }
    }

    return boardState;
}

// Convert the board state to SGF format
string boardStateToSGF(const vector<vector<char>>& boardState) {
    string sgfContent = "(;FF[4]GM[1]SZ[19]";
    string addBlack, addWhite;

    for (int y = 0; y < 19; ++y) {
        for (int x = 0; x < 19; ++x) {
            // SGF rows run top to bottom ('a' is the top row), matching the
            // image row order, so no vertical flip is needed here
            string coord = toSGFCoord(x, y);
            if (!coord.empty()) {
                if (boardState[y][x] == 'B') {
                    addBlack += "[" + coord + "]";
                }
                else if (boardState[y][x] == 'W') {
                    addWhite += "[" + coord + "]";
                }
            }
        }
    }

    if (!addBlack.empty())
        sgfContent += "AB" + addBlack;
    if (!addWhite.empty())
        sgfContent += "AW" + addWhite;
    sgfContent += ")";

    return sgfContent;
}

int main() {
    string imageFile = "go_board.jpg";
    Mat originalImage = imread(imageFile, IMREAD_COLOR);
    if (originalImage.empty()) {
        cerr << "Error: Could not open image." << endl;
        return -1;
    }

    Mat processedImage;
    vector<Point> corners = detectBoardCorners(originalImage, processedImage);

    Mat warpedImage;
    if (corners.size() == 4) {
        warpedImage = correctPerspective(originalImage, corners);
    }
    else {
        warpedImage = originalImage.clone();
        cerr << "Perspective correction skipped." << endl;
    }

    vector<Vec4i> horizontalLines, verticalLines;
    detectGridLines(warpedImage, horizontalLines, verticalLines);

    vector<Vec4i> selectedHorz, selectedVert;
    identifyGrid(horizontalLines, verticalLines, selectedHorz, selectedVert);

    vector<vector<char>> board = detectStonesAtIntersections(warpedImage, selectedHorz, selectedVert);
    string sgf = boardStateToSGF(board);

    ofstream sgfFile("current_state.s